Professional Cloud Developer Exam
Question #1 Topic 1
You want to upload files from an on-premises virtual machine to Google Cloud Storage as part of a data migration. These files will be consumed
by Cloud
Correct Answer: A
The gsutil cp command allows you to copy data between your local file system and Cloud Storage, using the credentials in the .boto file generated by running "gsutil config".
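For illustration, the upload itself is a one-liner (the bucket and directory names below are hypothetical):
```
# Recursively copy a local directory to Cloud Storage in parallel;
# "my-migration-bucket" is an assumed bucket name.
gsutil -m cp -r /data/migration-files gs://my-migration-bucket/
```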
I took the exam yesterday. Not a single question was from here. I failed the exam. Please do not waste your time here. The moderator does not approve this real review either.
upvoted 7 times
-- My comment above is still awaiting approval after two weeks; it seems someone doesn't want it to be published.
upvoted 5 times
I received my certificate on 19 Nov 2023 and passed with 88%. Most of the questions are directly from here and pass4surehub.com. Thank you ExamTopics and pass4surehub!!
upvoted 2 times
Selected Answer: A
Answer: A
Cloud Storage => gsutil. It doesn't matter how the files are going to be consumed; the task is to upload them.
upvoted 1 times
I would go with A.
upvoted 1 times
for sure A
upvoted 1 times
Selected Answer: B
Has anyone taken the exam recently? What percentage of the questions came from ExamTopics?
upvoted 1 times
Selected Answer: A
Selected Answer: A
A is correct
upvoted 1 times
Question #2 Topic 1
You migrated your applications to Google Cloud Platform and kept your existing monitoring platform. You now find that your notification system is too slow for time-critical problems.
C. Use Stackdriver to capture and alert on logs, then ship them to your existing platform.
D. Migrate some traffic back to your old platform and perform AB testing on the two platforms concurrently.
Correct Answer: B
Reference:
https://cloud.google.com/monitoring/
The task does not indicate that we should get rid of the old software. The pain point is slowness for time-critical problems only. Thus we would use Stackdriver for the time-critical alerts and still utilize the old platform for further analysis/storing of logs or whatever its business case is.
upvoted 17 times
Just took the exam and I passed with 88%. Surprisingly, about 50-60% of the questions came from here, and the other questions are from pass4surehub. A lot of Kubernetes + microservices-in-GKE questions were asked. Thanks to ExamTopics and Pass4surehub.com.
upvoted 1 times
Selected Answer: B
B is correct answer
upvoted 1 times
Notifications from the on-prem monitoring system are too slow & the applications are in GCP now => Stackdriver alerts on logs.
There's no mention of whether the apps have been migrated to GCE, GKE, App Engine, or Cloud Run, so "Compute Engine instances" comes from an assumption.
upvoted 2 times
Selected Answer: C
I would go with C.
upvoted 1 times
Selected Answer: C
C
You have problems with notifications.
Option C allows you to use Stackdriver to send alerts immediately, and straight afterwards it ships all this data to your on-prem monitoring platform.
upvoted 3 times
Selected Answer: B
Selected Answer: C
Think twice. You have a working, expensive monitoring system (e.g., Splunk), and the problem is an unacceptable delay between incident and notification. You need to fix this problem, not stage a revolution (changing the monitoring system). You can leverage GCP Monitoring with its out-of-the-box alerting with little effort, because whether you want it or not, the logs are already in Cloud Logging. Simply implement alerts and push logs to Splunk. Simple.
upvoted 3 times
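A minimal sketch of that pattern, assuming the on-prem platform consumes logs from a Pub/Sub topic (all names are hypothetical):
```
# Export matching log entries to a Pub/Sub topic that the existing
# platform (e.g., Splunk) subscribes to; project/topic names are assumptions.
gcloud logging sinks create splunk-export \
  pubsub.googleapis.com/projects/my-project/topics/splunk-logs \
  --log-filter='severity>=WARNING'
```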
A is correct
upvoted 2 times
Community choice is C
upvoted 1 times
Question #3 Topic 1
You are planning to migrate a MySQL database to the managed Cloud SQL database for Google Cloud. You have Compute Engine virtual machine
instances that will connect with this Cloud SQL instance. You do not want to whitelist IPs for the Compute Engine instances to be able to access
Cloud SQL.
B. Whitelist a project to access Cloud SQL, and add Compute Engine instances in the whitelisted project.
C. Create a role in Cloud SQL that allows access to the database from external instances, and assign the Compute Engine instances to that
role.
D. Create a CloudSQL instance on one project. Create Compute engine instances in a different project. Create a VPN between these two
Correct Answer: C
Reference:
https://cloud.google.com/sql/docs/mysql/connect-external-app
The proposed answer seems incorrect. According to the question, the application accessing Cloud SQL runs on Compute Engine, and there are no roles in Cloud SQL itself to manage instance-level access control. According to https://cloud.google.com/sql/docs/mysql/connect-compute-engine there are 3 possible ways to connect from Compute Engine: 'Private IP', 'Public IP', 'Cloud SQL Proxy'.
There is no 'Cloud SQL Proxy' option in the answers, and 'Public IP' requires IP whitelisting, which is unacceptable according to the question, so the only valid answer is 'Private IP'.
upvoted 25 times
Enabling private IP allows the Compute Engine instances and the Cloud SQL instance to communicate over a private, internal network within
Google Cloud Platform (GCP), rather than relying on external IP whitelisting.
upvoted 1 times
1. Private IP: You can use the private IP of the Cloud SQL instance to connect to it from the Compute Engine instance. This requires that the Cloud
SQL instance and the Compute Engine instance are in the same VPC network.
2. Public IP: You can use the public IP of the Cloud SQL instance to connect to it from the Compute Engine instance. This requires that the Cloud
SQL instance is configured to allow connections from the public IP of the Compute Engine instance.
upvoted 3 times
4. Cloud SQL Auth proxy Docker image: The Cloud SQL Auth proxy Docker image is a Docker image that contains the Cloud SQL Auth proxy.
You can use this Docker image to run the Cloud SQL Auth proxy in a Docker container on the Compute Engine instance. This allows you to
easily deploy and manage the Cloud SQL Auth proxy on the Compute Engine instance.
upvoted 3 times
Create a VPC network: First, you need to create a VPC network in which the Cloud SQL instance and the Compute Engine instance will be
placed.
Create a Cloud SQL instance: Next, you need to create a Cloud SQL instance and specify the VPC network that you created in step 1 as the
network for the Cloud SQL instance.
Enable private IP: Finally, you can enable private IP on the Cloud SQL instance by going to the "Networking" tab in the Cloud SQL instance's
configuration page and selecting the "Private IP" option.
Once you have enabled private IP on the Cloud SQL instance, you can access it from the Compute Engine instance using the private IP of the
Cloud SQL instance.
Answer is A
upvoted 1 times
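A hedged sketch of those steps with gcloud (instance, network, and project names are assumptions, and the VPC must already have a private services access connection):
```
# Create a Cloud SQL for MySQL instance reachable only via private IP;
# "my-vpc" and "my-project" are assumed names.
gcloud sql instances create my-sql-instance \
  --database-version=MYSQL_8_0 \
  --tier=db-n1-standard-1 \
  --region=us-central1 \
  --network=projects/my-project/global/networks/my-vpc \
  --no-assign-ip
```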
A is correct
upvoted 2 times
Selected Answer: A
The question is about "connection". Role assignment gives a set of permissions to the Compute Engine instance but doesn't allow a connection.
upvoted 2 times
"...you can use the default Compute Engine service account associated with the Compute Engine instance. As with all accounts connecting to a
Cloud SQL instance, the service account must have the Cloud SQL > Client role."
upvoted 1 times
Selected Answer: A
Question #4 Topic 1
You have deployed an HTTP(s) Load Balancer with the gcloud commands shown below.
Health checks to port 80 on the Compute Engine virtual machine instance are failing and no traffic is sent to your instances. You want to resolve
the problem.
C. gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --source-ranges 130.211.0.0/22,35.191.0.0/16 --direction
INGRESS
D. gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --destination-ranges 130.211.0.0/22,35.191.0.0/16 --
direction EGRESS
Correct Answer: C
Reference:
https://cloud.google.com/vpc/docs/special-configurations
the source IP ranges for health checks (including legacy health checks if used for HTTP(S) Load Balancing) are:
35.191.0.0/16
130.211.0.0/22
Furthermore, it should be direction INGRESS, since the health check (ping) is coming into the load balancer/instance.
source: https://cloud.google.com/load-balancing/docs/health-checks
upvoted 10 times
I would go with C.
upvoted 1 times
gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --source-ranges 130.211.0.0/22,35.191.0.0/16 --direction INGRESS
This will create a firewall rule that allows incoming TCP traffic from the specified IP ranges to the Load Balancer network. This should allow traffic to
reach the instance group and the instances it contains.
Option A will not help because it is used to add an external IP address to an instance, which is not necessary for the Load Balancer to work. Option
B is not necessary because it is used to apply metadata to an instance, which is not related to the Load Balancer. Option D is not correct because it
allows outgoing traffic from the Load Balancer network, which is not necessary for the Load Balancer to work.
I hope this helps! Let me know if you have any other questions.
upvoted 2 times
Selected Answer: C
C is correct
upvoted 1 times
Question #5 Topic 1
Your website is deployed on Compute Engine. Your marketing team wants to test conversion rates between 3 different website designs.
Correct Answer: A
Reference:
https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
Ans:A
https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
upvoted 11 times
Selected Answer: A
A is Correct.
upvoted 1 times
Selected Answer: A
you have a URL for each version deployed in the same service.
upvoted 2 times
A is correct
upvoted 2 times
A of course
upvoted 1 times
Selected Answer: A
I vote A, but it could be wrong because the question is not detailed. It doesn't ask for the URL to remain the same. So B could be a good answer: that way you have 3 different URLs without any need to manage splitting.
upvoted 1 times
Answer is A
upvoted 1 times
If you want to test three different websites and compare them, you would use traffic splitting (33% each).
upvoted 2 times
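If the three designs were deployed as App Engine versions, the split could be set up roughly like this (service and version names are assumptions):
```
# Split traffic evenly across three versions by random assignment;
# version names are hypothetical.
gcloud app services set-traffic default \
  --splits=v1=0.34,v2=0.33,v3=0.33 \
  --split-by=random
```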
Question #6 Topic 1
You need to copy directory local-scripts and all of its contents from your local workstation to a Compute Engine virtual machine instance.
C. gcloud compute scp --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b"
Correct Answer: C
Reference:
https://cloud.google.com/sdk/gcloud/reference/compute/copy-files
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 1 times
C because of scp
upvoted 1 times
Question #7 Topic 1
You are deploying your application to a Compute Engine virtual machine instance with the Stackdriver Monitoring Agent installed. Your application
is a unix process on the instance. You want to be alerted if the unix process has not run for at least 5 minutes. You are not able to change the application code.
A. Uptime check
B. Process health
C. Metric absence
D. Metric threshold
Correct Answer: B
Reference:
https://cloud.google.com/monitoring/alerts/concepts-indepth
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
https://cloud.google.com/monitoring/alerts/policies-in-json#json-process-health
upvoted 1 times
B is correct
upvoted 1 times
A is wrong
Process-health policy
A process-health policy can notify you if the number of processes that match a pattern crosses a threshold. This can be used to tell you, for
example, that a process has stopped running.
source: https://cloud.google.com/monitoring/alerts/policies-in-json#json-process-health
upvoted 3 times
Question #8 Topic 1
You have two tables in an ANSI-SQL compliant database with identical columns that you need to quickly combine into a single table, removing duplicate rows.
Correct Answer: C
Reference:
https://www.techonthenet.com/sql/union_all.php
The only difference between UNION and UNION ALL is that UNION ALL will not remove duplicate rows or records; instead, it just selects all the rows from all the tables that meet the conditions of your specific query and combines them into the result table.
upvoted 7 times
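A minimal sketch in BigQuery standard SQL (dataset/table names are assumptions); plain UNION in ANSI SQL, spelled UNION DISTINCT in BigQuery, is what removes the duplicates:
```
# Combine two identically-shaped tables while removing duplicate rows;
# table names are hypothetical.
bq query --use_legacy_sql=false '
SELECT * FROM mydataset.table_a
UNION DISTINCT
SELECT * FROM mydataset.table_b'
```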
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 1 times
Question #9 Topic 1
You have an application deployed in production. When a new version is deployed, some issues don't arise until the application receives traffic from
users in production. You want to reduce both the impact and the number of users affected.
A. Blue/green deployment
B. Canary deployment
C. Rolling deployment
D. Recreate deployment
Correct Answer: A
Reference:
https://thenewstack.io/deployment-strategies/
Selected Answer: B
B:
https://cloud.google.com/architecture/application-deployment-and-testing-strategies#canary_test_pattern
upvoted 1 times
I think A is correct because the switch to green only happens after you perform all the tests on it. So you can also test traffic (to satisfy the question); this is the point of blue-green deployment as far as I understand it.
B is not correct because real traffic is partially switched to the new version immediately, and so it affects some users.
reference: https://digitalvarys.com/what-is-blue-gren-deployment/
upvoted 1 times
B is correct
upvoted 2 times
Selected Answer: B
For me it is B. But I cannot understand why all purchased exam tests put Blue/Green as the correct answer for this question. It's clear that Canary is the right one, because it forwards only a few users to the new deployment (not everyone, as blue/green does) and also allows rollback.
upvoted 1 times
Selected Answer: B
1. Reducing impact
2. Number of users affected
If you want to meet both of these conditions, you need to choose Canary.
upvoted 3 times
Your company wants to expand their users outside the United States for their popular application. The company wants to ensure 99.999%
availability of the database for their application and also wants to minimize the read latency for their users across the globe.
E. Create a minimum of two Cloud Spanner instances in separate regions with at least one node.
Correct Answer: BF
More nodes means less read latency, hence I will go with options A and C.
upvoted 9 times
As for the second answer, I do not have a strong opinion. As per the documentation, "Adding nodes gives each replica more CPU and RAM, which increases the replica's throughput," and they recommend choosing the number of nodes to "keep high priority total CPU utilization under 65%." So nodes are not about SLA and read latency. On the other hand, "Cloud Spanner automatically replicates your data between regions with strong consistency guarantees," so no Dataflow pipeline is needed to replicate data, unless the app has other DBs and ETL between Spanner and those DBs.
upvoted 7 times
Selected Answer: AC
Selected Answer: AC
it's obvious
upvoted 1 times
Selected Answer: AC
3 regions at least, and for those who say there is no region "man-asia-eur1": take a look at the console, it's the multi-region nomenclature!
upvoted 1 times
We need a multi-region DB (Spanner) to satisfy the 99.999% SLA and multiple nodes to ensure the resources needed.
upvoted 2 times
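A sketch of creating such an instance (instance name, description, and node count are assumptions; nam-eur-asia1 is one of the multi-region configurations):
```
# Multi-region Spanner instance; the 99.999% availability SLA applies to
# multi-region configurations. Names and node count are hypothetical.
gcloud spanner instances create global-app-db \
  --config=nam-eur-asia1 \
  --description="Globally replicated application DB" \
  --nodes=3
```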
AC are correct
upvoted 3 times
https://cloud.google.com/spanner/docs/latency
You need to migrate an internal file upload API with an enforced 500-MB file size limit to App Engine.
Correct Answer: C
Reference:
https://wiki.christophchamp.com/index.php?title=Google_Cloud_Platform
Selected Answer: D
By changing the API to support multipart file uploads, you can maintain the functionality of your existing API while adapting it to the App Engine
environment.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 1 times
C is the answer
upvoted 2 times
You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. The application exposes an HTTP-based health check at
/healthz. You want to use this health check endpoint to determine whether traffic should be routed to the pod by the load balancer.
A.
B.
C.
D.
Correct Answer: B
For the GKE ingress controller to use your readinessProbes as health checks, the Pods for an Ingress must exist at the time of Ingress creation.
If your replicas are scaled to 0, the default health check will apply.
The liveness probe, specified in option A, is used to determine whether the application is running and responsive. If the liveness probe fails, the
application is considered to be in a failed state and will be restarted.
The readiness probe, specified in option B, is used to determine when a Pod is ready to receive traffic. If the readiness probe fails, the Pod will not
receive traffic from the load balancer until it becomes healthy again.
So think of it this way. The pod passes its initial check (readiness), accepts traffic for a while then crashes (logically, the pod can still be "Running").
The Liveness probe is responsible for detecting it. Otherwise, the LB could still pass traffic to a pod that can't serve traffic.
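For reference, a readiness probe against /healthz might look like the sketch below (image, port, and timings are assumptions, not taken from the question's answer options):
```
# Deployment with a readinessProbe on /healthz; the GKE ingress controller
# can pick this up as the backend health check. Names are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app:1.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
EOF
```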
An Ingress BackendConfig can set a healthCheck, but this resource cannot set httpGet:
```
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: my-backendconfig
spec:
healthCheck:
checkIntervalSec: INTERVAL
timeoutSec: TIMEOUT
healthyThreshold: HEALTH_THRESHOLD
unhealthyThreshold: UNHEALTHY_THRESHOLD
type: PROTOCOL
requestPath: PATH
port: PORT
```
upvoted 1 times
B is correct
upvoted 3 times
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes
upvoted 2 times
Your teammate has asked you to review the code below. Its purpose is to efficiently add a large number of small rows to a BigQuery table.
C. Write each row to a Cloud Storage object, then load into BigQuery.
D. Write each row to a Cloud Storage object in parallel, then load into BigQuery.
Correct Answer: B
Selected Answer: B
The response should be A, because the original code pushes one row at a time, which is more time-consuming than batch processing.
Proposed answer C is incorrect, because we still have more overhead in sending each row in a separate request than in using batch processing.
upvoted 6 times
Selected Answer: B
B - I was between A and B. Both options require changes in the code, and Option B requires changes in the way you are managing the collection. If you insert multiple rows at a time, you would still need to move through the ROWS in the collection one by one (remember, this is a loop) to then insert in bulk. If you first break the collection into (n) subsets and then run the function in (n) threads, you would be moving through (n) subsets at a time, making (n) insertions at a time, all in parallel. That was my way of viewing it.
Option A would actually not even make a change in performance (sort of); you would just be interacting with the database less. (If interacting less is faster, then you would see a small decrease in insert latencies.)
Selected Answer: A
I would go with A.
upvoted 1 times
Selected Answer: A
Selected Answer: A
Batch inserts are more efficient than individual inserts and will increase write performance by reducing the overhead of creating and sending
individual requests for each row. Parallel inserts could potentially lead to conflicting writes or cause resource exhaustion, and adding a step of
writing to Cloud Storage and then loading into BigQuery can add additional overhead and complexity.
upvoted 1 times
Selected Answer: A
It is generally more efficient to insert multiple rows in a single request, rather than making a separate request for each row. This reduces the
overhead of making multiple HTTP requests, and can also improve performance by allowing BigQuery to perform more efficient batch operations.
You can use the InsertAllRequest.RowToInsert.of(row) method to add multiple rows to a single request
upvoted 1 times
This will insert the rows in batches of BATCH_SIZE, which you can adjust based on the desired balance between performance and resource
usage.
upvoted 1 times
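As a CLI analogue of the batched approach (the question's code uses the Java client; file and table names here are assumptions):
```
# Send several rows to BigQuery in one streaming-insert call instead of
# one request per row; rows.json is newline-delimited JSON.
cat > rows.json <<'EOF'
{"account": "A-1", "amount": 10.5}
{"account": "A-2", "amount": 3.25}
{"account": "A-1", "amount": 7.0}
EOF
bq insert mydataset.transactions rows.json
```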
Option C, writing each row to a Cloud Storage object before loading into BigQuery, would likely be less efficient than simply inserting the
rows directly into BigQuery. It would involve additional steps and potentially increase the overall time it takes to write the rows to the table.
upvoted 1 times
Selected Answer: A
vote A
upvoted 1 times
The original code inserts one row at a time, so there is no point in using parallel requests.
upvoted 1 times
Parallel saving to the database can increase the total insertion time and depends on many system conditions, while batch saving is optimized at the database core level.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 2 times
Selected Answer: A
https://cloud.google.com/bigquery/docs/samples/bigquery-table-insert-rows
upvoted 1 times
Selected Answer: A
This should be A.
upvoted 1 times
Batch insert
upvoted 1 times
You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE). Callers of the service will exist within the same GKE
cluster. You want clients to be able to get the IP address of the service.
A. Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find the service's cluster IP address.
B. Define a GKE Service. Clients should use the service name in the URL to connect to the service.
C. Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in the client container.
D. Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.
Correct Answer: C
It's B - clients are in the cluster and can therefore use service DNS names.
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
"Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod's DNS search list includes the
Pod's own namespace and the cluster's default domain."
upvoted 19 times
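For illustration, any pod in the cluster can reach the service through its DNS name (the Service name "image-resizer" is an assumption):
```
# Call the service via cluster DNS from a throwaway pod; name and
# namespace are hypothetical.
kubectl run dns-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://image-resizer.default.svc.cluster.local/
```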
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
https://www.exam-answer.com/gke-service-url-image-resizing-api
upvoted 1 times
Selected Answer: A
A.
A GKE endpoint is external-facing, so options C and D are out. Also, exposing an endpoint won't expose all containers in the GKE cluster: if one service is exposed across 4000 nodes with containers, does this mean GKE would need to update 4000 times? That just doesn't make sense. Option B uses the service name, in other words a CNAME, so it still has to go through Cloud DNS. Hence option A should be correct.
upvoted 1 times
It's B
upvoted 1 times
The answer is B, because the clients are in the same cluster, so the service name can be used.
upvoted 1 times
Selected Answer: C
The question reads "IP address", and I don't think the IP can be obtained using B.
upvoted 1 times
https://cloud.google.com/kubernetes-engine/docs/concepts/service-discovery
"In Kubernetes, service discovery is implemented with automatically generated service names that map to the Service's IP address. Service names
follow a standard specification: as follows: my-svc.my-namespace.svc.cluster-domain.example. Pods can also access external services through their
names, such as example.com. "
upvoted 3 times
Selected Answer: B
B is correct
upvoted 3 times
B is correct
upvoted 1 times
Selected Answer: B
Should be B
upvoted 1 times
If the client and server are in the same namespace, C is correct. But in this case there is no condition on the namespace, so the client pod must use the server's service name.
upvoted 1 times
You are using Cloud Build to build and test application source code stored in Cloud Source Repositories. The build process requires a build tool
A. Download the binary from the internet during the build process.
B. Build a custom cloud builder image and reference the image in your build steps.
C. Include the binary in your Cloud Source Repositories repository and reference it in your build scripts.
D. Ask to have the binary added to the Cloud Build environment by filing a feature request against the Cloud Build public Issue Tracker.
Correct Answer: B
B is the correct answer
https://cloud.google.com/cloud-build/docs/configuring-builds/use-community-and-custom-builders#creating_a_custom_builder
upvoted 9 times
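A rough sketch of that flow (project, image, and args are assumptions):
```
# Build and push a custom builder image containing the required tool,
# then reference it as a build step; all names are hypothetical.
gcloud builds submit --tag gcr.io/my-project/my-build-tool ./builder

cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/my-project/my-build-tool'
  args: ['--target', 'release']
EOF
```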
Selected Answer: B
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 1 times
Selected Answer: B
B is correct
upvoted 2 times
Selected Answer: B
B is correct
upvoted 1 times
b https://cloud.google.com/cloud-build/docs/configuring-builds/use-community-and-custom-builders#creating_a_custom_builder
upvoted 1 times
You are deploying your application to a Compute Engine virtual machine instance. Your application is configured to write its log files to disk. You
want to view the logs in Stackdriver Logging without changing the application code.
A. Install the Stackdriver Logging Agent and configure it to send the application logs.
B. Use a Stackdriver Logging Library to log directly from the application to Stackdriver Logging.
C. Provide the log file folder path in the metadata of the instance to configure it to send the application logs.
D. Change the application to log to /var/log so that its logs are automatically sent to Stackdriver Logging.
Correct Answer: A
Selected Answer: A
Correct answer is A
upvoted 1 times
Selected Answer: A
A is correct
upvoted 1 times
Selected Answer: A
A is correct
upvoted 3 times
Selected Answer: A
https://cloud.google.com/logging/docs/agent/logging/installation
upvoted 2 times
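Per the linked installation page, the install itself is a two-liner on the VM:
```
# Download and install the legacy Logging Agent (google-fluentd).
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
```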
A is correct
upvoted 3 times
Your service adds text to images that it reads from Cloud Storage. During busy times of the year, requests to Cloud Storage fail with an HTTP 429 "Too Many Requests" error.
Correct Answer: C
Reference:
https://developers.google.com/gmail/api/v1/reference/quota
Selected Answer: C
Selected Answer: C
C is Correct
upvoted 1 times
An HTTP 429 "Too Many Requests" status code indicates that the server is receiving too many requests and is unable to handle them all. In this
situation, it is generally best to retry the request after a period of time, using a truncated exponential backoff strategy. This involves retrying the
request with increasingly longer delays between each retry, up to a maximum delay. The delays can be generated using an exponential backoff
formula, which increases the delay by a power of two on each retry. The retries can be truncated at a maximum delay to prevent the retries from
taking too long.
upvoted 1 times
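A shell sketch of truncated exponential backoff around the Cloud Storage call (bucket name, retry count, and the 32-second cap are assumptions):
```
# Retry with exponentially growing, capped delays plus a little jitter.
attempt=0
until gsutil cp annotated.jpg gs://my-bucket/; do
  attempt=$((attempt + 1))
  [ "$attempt" -ge 6 ] && { echo "giving up"; exit 1; }
  delay=$((2 ** attempt))
  [ "$delay" -gt 32 ] && delay=32   # truncate the backoff at 32 seconds
  sleep $((delay + RANDOM % 3))     # add small random jitter
done
```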
C is correct
upvoted 3 times
C is correct
upvoted 3 times
You are building an API that will be used by Android and iOS apps. The API must:
* Support HTTPs
A. RESTful APIs
C. gRPC-based APIs
D. SOAP-based APIs
Correct Answer: A
Reference:
https://www.devteam.space/blog/how-to-build-restful-api-for-your-mobile-app/
Selected Answer: A
A, according to https://www.exam-answer.com/api-architecture-for-android-ios-apps
upvoted 1 times
Selected Answer: C
Selected Answer: C
Answer is C: gRPC is a high-performance, open-source universal RPC framework which supports IOS and Android as well.
upvoted 1 times
Selected Answer: A
https://www.exam-answer.com/api-architecture-for-android-ios-
apps#:~:text=The%20most%20suitable%20API%20architecture,used%20for%20building%20web%20APIs.
upvoted 1 times
gRPC-based APIs
upvoted 1 times
Selected Answer: C
https://www.imaginarycloud.com/blog/grpc-vs-rest/
gRPC architectural style has promising features that can (and should) be explored. It is an excellent option for working with multi-language
systems, real-time streaming, and for instance, when operating an IoT system that requires light-weight message transmission such as the
serialized Protobuf messages allow. Moreover, gRPC should also be considered for mobile applications since they do not need a browser and can
benefit from smaller messages, preserving mobiles' processors' speed.
upvoted 2 times
Selected Answer: C
The correct is C
upvoted 1 times
gRPC (gRPC Remote Procedure Calls) is a modern, high-performance, open-source remote procedure call (RPC) framework that can be used to
build APIs. It uses HTTP/2 as the underlying transport protocol and Protocol Buffers as the encoding format. gRPC is designed to be low-bandwidth, low-latency, and easily integrable with mobile apps. It also supports HTTPS out of the box.
RESTful APIs (A) are a popular choice for building APIs, but they may not be as efficient as gRPC in terms of bandwidth usage, especially for APIs that transfer large amounts of data. MQTT (B) is a lightweight messaging protocol that is often used in IoT applications, but it may not be as well-suited for building APIs as gRPC. SOAP-based APIs (D) are an older style of API that has largely been replaced by more modern alternatives like gRPC.
upvoted 3 times
https://cloud.google.com/blog/products/api-management/understanding-grpc-openapi-and-rest-and-when-to-use-them
upvoted 2 times
A is correct
upvoted 2 times
Your application takes an input from a user and publishes it to the user's contacts. This input is stored in a table in Cloud Spanner. Your application is more sensitive to latency and less sensitive to consistency.
How should you perform reads from Cloud Spanner for this application?
Correct Answer: D
Reference:
https://cloud.google.com/solutions/best-practices-cloud-spanner-gaming-database
Also if your application is latency sensitive but tolerant of stale data, then stale reads can provide performance benefits.
source: https://cloud.google.com/spanner/docs/reads
upvoted 6 times
https://cloud.google.com/spanner/docs/reads:
"A stale read is read at a timestamp in the past. If your application is latency sensitive but tolerant of stale data, then stale reads can provide
performance benefits."
B is the answer, since the question is asking about reading from Cloud Spanner; no writes are involved.
upvoted 5 times
Selected Answer: B
It should be B.
Since the application is more sensitive to latency and less sensitive to consistency, performing stale reads is the best choice here. It will provide low latency at the cost of potentially returning stale data.
upvoted 1 times
Selected Answer: B
Should be B
upvoted 1 times
Selected Answer: B
B is correct
upvoted 1 times
B is correct
upvoted 3 times
Your application is deployed in a Google Kubernetes Engine (GKE) cluster. When a new version of your application is released, your CI/CD tool
updates the spec.template.spec.containers[0].image value to reference the Docker image of your new application version. When the Deployment
object applies the change, you want to deploy at least 1 replica of the new version and maintain the previous replicas until the new replica is
healthy.
Which change should you make to the GKE Deployment object shown below?
A. Set the Deployment strategy to RollingUpdate with maxSurge set to 0, maxUnavailable set to 1.
B. Set the Deployment strategy to RollingUpdate with maxSurge set to 1, maxUnavailable set to 0.
C. Set the Deployment strategy to Recreate with maxSurge set to 0, maxUnavailable set to 1.
D. Set the Deployment strategy to Recreate with maxSurge set to 1, maxUnavailable set to 0.
Correct Answer: D
RollingUpdate: New pods are added gradually, and old pods are terminated gradually
Recreate: All old pods are terminated before any new pods are added
The question asks us to retain the current version, hence RollingUpdate is the better option here.
upvoted 17 times
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-upgrades:
"The simplest way to take advantage of surge upgrade is to configure maxSurge=1 maxUnavailable=0. This means that only 1 surge node can be
added to the node pool during an upgrade so only 1 node will be upgraded at a time. This setting is superior to the existing upgrade configuration
(maxSurge=0 maxUnavailable=1) because it speeds up Pod restarts during upgrades while progressing conservatively."
Answer is B
upvoted 7 times
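For reference, a sketch of applying that strategy to an existing Deployment (the Deployment name is hypothetical):
```
# JSON merge patch setting RollingUpdate with maxSurge=1, maxUnavailable=0;
# "my-app" is an assumed Deployment name.
kubectl patch deployment my-app --type=merge -p \
  '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
```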
Selected Answer: B
The RollingUpdate Deployment strategy allows you to specify the number of replicas that can be created or removed at a time as part of the
update process. The maxSurge parameter specifies the maximum number of replicas that can be created in excess of the desired number of
replicas, and the maxUnavailable parameter specifies the maximum number of replicas that can be unavailable at any given time.
By setting maxSurge to 1 and maxUnavailable to 0, you are telling the Deployment to create at least 1 new replica of the new version and to
maintain all of the previous replicas until the new replica is healthy. This will ensure that at least 1 replica of the new version is always available,
while allowing the Deployment to gradually roll out the update to the rest of the replicas.
upvoted 2 times
Selected Answer: B
B is obvious
upvoted 1 times
B is correct
upvoted 1 times
B is correct
upvoted 2 times
Selected Answer: B
Answer is B
upvoted 1 times
Selected Answer: B
Answer is B
upvoted 1 times
You plan to make a simple HTML application available on the internet. This site keeps information about FAQs for your application. The
application is static and contains images, HTML, CSS, and Javascript. You want to make this application available on the internet with as few
steps as possible.
C. Create a Compute Engine instance with Apache web server installed. Configure Apache web server to host the application.
D. Containerize your application first. Deploy this container to Google Kubernetes Engine (GKE) and assign an external IP address to the GKE
Correct Answer: A
Reference:
https://cloud.google.com/storage/docs/hosting-static-website
Selected Answer: A
Cloud Storage is the correct answer because it is the simplest way to host a static website containing images, HTML, CSS, and JavaScript. Simply
upload the static files to Cloud Storage, and they can be served on the internet with minimal configuration. Cloud Storage provides high availability
and reliability, ensuring a fast and secure user experience.
Source: https://examlab.co/google/google-cloud-professional-cloud-developer
upvoted 1 times
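A minimal sketch of those few steps (bucket name, region, and file names are assumptions):
```
# Create a bucket, upload the site, set index/404 pages, and make the
# objects publicly readable; names are hypothetical.
gsutil mb -l us-central1 gs://www.example.com
gsutil -m cp -r ./site/* gs://www.example.com
gsutil web set -m index.html -e 404.html gs://www.example.com
gsutil iam ch allUsers:objectViewer gs://www.example.com
```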
Correct Answer: A
upvoted 1 times
A is correct
upvoted 3 times
Selected Answer: A
A is correct
upvoted 1 times
Your company has deployed a new API to App Engine Standard environment. During testing, the API is not behaving as expected. You want to
monitor the application over time to diagnose the problem within the application code without redeploying the application.
A. Stackdriver Trace
B. Stackdriver Monitoring
Correct Answer: B
Reference:
https://rominirani.com/gcp-stackdriver-tutorial-debug-snapshots-traces-logging-and-logpoints-1ba49e4780e6
Option C: Stackdriver Debug Snapshots allow you to inspect the state of an application at any code location in production, without stopping or
slowing down your applications.
upvoted 1 times
Selected Answer: B
Selected Answer: D
To use Stackdriver Debug Snapshots to monitor your application, you would need to take periodic snapshots of your application and then analyze
the snapshot data to identify any issues or problems. However, this would not be a real-time monitoring solution, and it would not allow you to
continuously monitor your application for issues. Instead, it would be a way to investigate issues after they have occurred, by examining the state
of the application at the time the snapshot was taken.
upvoted 1 times
To use Stackdriver Debug Logpoints to monitor your application, you would need to insert logpoints into your code at strategic points, and then
analyze the log output to identify any issues or problems. However, this would not be a real-time monitoring solution, and it would not allow
you to continuously monitor your application for issues. Instead, it would be a way to investigate issues after they have occurred, by examining
the log output that was generated.
upvoted 1 times
To monitor the application over time to diagnose a problem within the application code without redeploying the application, you should use
Stackdriver Monitoring (B). Stackdriver Monitoring provides a range of tools that allow you to view and analyze performance metrics, traces,
and logs for your application. This can help you identify and troubleshoot issues with your application.
upvoted 1 times
I think this question will become obsolete, since Cloud Debugger is being deprecated: "Cloud Debugger is deprecated and will be shut down May 31, 2023. See the deprecations page and release notes for more information."
"Cloud Debugger is deprecated and is scheduled for shutdown on May 31, 2023. For an alternative, use the open source CLI tool, Snapshot Debugger."
https://cloud.google.com/debugger/docs/release-notes
Selected Answer: D
"You want to monitor the application over time to diagnose the problem within the application code."
If it were only for monitoring it would be B, but it mentions "within the code", so it should be D.
upvoted 1 times
Selected Answer: B
Selected Answer: D
https://cloud.google.com/debugger/docs/using/logpoints
upvoted 1 times
D is correct
upvoted 3 times
https://cloud.google.com/debugger/docs/using/logpoints
upvoted 1 times
Community choice is D
upvoted 1 times
Answer is D
upvoted 2 times
You want to use the Stackdriver Logging Agent to send an application's log file to Stackdriver from a Compute Engine virtual machine instance.
After installing the Stackdriver Logging Agent, what should you do first?
D. Create a Stackdriver Logs Export Sink with a filter that matches the application's log entries.
Correct Answer: B
Selected Answer: C
it's C
upvoted 1 times
Selected Answer: C
We need to configure the log source in the Stackdriver agent to read the logs.
upvoted 1 times
Selected Answer: C
After installing the Stackdriver agent, you need to configure the new source from which to read the logs to be sent.
upvoted 1 times
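A sketch of such a source, assuming the application writes to /var/opt/my-app/app.log (the paths and tag are assumptions):
```
# Add a tail source for the application's log file, then restart the
# agent; file paths and the tag are hypothetical.
sudo tee /etc/google-fluentd/config.d/my-app.conf <<'EOF'
<source>
  @type tail
  format none
  path /var/opt/my-app/app.log
  pos_file /var/lib/google-fluentd/pos/my-app.pos
  read_from_head true
  tag my-app
</source>
EOF
sudo service google-fluentd restart
```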
first C then D
upvoted 1 times
I vote for B.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 3 times
Selected Answer: C
https://cloud.google.com/logging/docs/agent/configuration
upvoted 1 times
Answer is C
upvoted 1 times
Your company has a BigQuery data mart that provides analytics information to hundreds of employees. One user wants to run jobs without
interrupting important workloads. This user isn't concerned about the time it takes to run these jobs. You want to fulfill this request while
minimizing cost to the company and the effort required on your part.
D. Allow the user to run jobs when important workloads are not running.
Correct Answer: B
B is wrong, since it will incur more costs, which is not what the question wants.
C is definitely out, as creating roles is not what the question is asking for.
D is wrong, as it would not minimise effort.
upvoted 10 times
Selected Answer: A
Batch jobs in BigQuery are not subject to the usual quota limits and do not count towards your concurrent rate limit, which makes them suitable
for running large queries and reducing costs. They are executed when system resources become available, so there might be a delay, but since the
user isn't concerned about the time it takes to run these jobs, this would be a suitable solution.
upvoted 1 times
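Submitting at batch priority from the CLI is a single flag (the query and table names are assumptions):
```
# Run the user's query at batch priority so it queues behind
# interactive workloads; table name is hypothetical.
bq query --batch --use_legacy_sql=false \
  'SELECT department, SUM(cost) FROM mart.expenses GROUP BY department'
```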
Correct Answer:A
upvoted 1 times
Selected Answer: A
A is correct
upvoted 1 times
Selected Answer: A
A is correct
upvoted 2 times
You want to notify on-call engineers about a service degradation in production while minimizing development time.
Correct Answer: A
I don't think the correct answer is A). Cloud Functions are not about monitoring at all, though I have found one mention of using Cloud Functions for monitoring: https://cloud.google.com/solutions/serverless-web-performance-monitoring-using-cloud-functions . But that article is about web page performance, and it requires a lot of effort. The question does not have info about the kind of service to monitor, so I think the answer should be D) - "Use Stackdriver Monitoring to monitor resources and raise alerts".
upvoted 11 times
Selected Answer: D
D
Error Reporting is not about service degradation; moreover, Error Reporting uses Monitoring to send alerts.
https://cloud.google.com/error-reporting/docs/notifications
upvoted 1 times
Selected Answer: D
D is correct
upvoted 1 times
Selected Answer: D
D is correct
upvoted 3 times
Selected Answer: D
You are writing a single-page web application with a user-interface that communicates with a third-party API for content using XMLHttpRequest.
The data displayed on the UI by the API results is less critical than other data displayed on the same web page, so it is acceptable for some
requests to not have the API data displayed in the UI. However, calls made to the API should not delay rendering of other parts of the user
interface. You want your application to perform well when the API response is an error or a timeout.
A. Set the asynchronous option for your requests to the API to false and omit the widget displaying the API results when a timeout or error is
encountered.
B. Set the asynchronous option for your request to the API to true and omit the widget displaying the API results when a timeout or error is
encountered.
C. Catch timeout or error exceptions from the API call and keep trying with exponential backoff until the API response is successful.
D. Catch timeout or error exceptions from the API call and display the error response in the UI widget.
Correct Answer: A
Answer is B.
The API should not delay rendering: asynchronous.
The application should perform well on API error or timeout: omit the widget.
upvoted 11 times
Correct answer is B
Asynchronous handling provides the ability to call the API in the background without blocking the rendering of other elements. If the response is
received it can be rendered or omitted if a timeout occurs.
upvoted 6 times
Selected Answer: B
Correct Answer B.
upvoted 1 times
Setting the asynchronous option to true means that the requests will not block the main thread and will be executed in the background.
Furthermore, as written in the description, the widget can be omitted.
upvoted 1 times
B is the correct answer because setting the asynchronous option for the API request to true allows the user interface to continue rendering while
the API request is being processed, which improves the performance of the application. Omitting the widget displaying the API results when a
timeout or error is encountered allows the application to continue functioning without waiting for a successful API response.
upvoted 1 times
B is correct
upvoted 3 times
Selected Answer: B
You are creating a web application that runs in a Compute Engine instance and writes a file to any user's Google Drive. You need to configure the
application to authenticate to the Google Drive API. What should you do?
A. Use an OAuth Client ID that uses the https://www.googleapis.com/auth/drive.file scope to obtain an access token for each user.
C. Use the App Engine service account and https://www.googleapis.com/auth/drive.file scope to generate a signed JSON Web Token (JWT).
D. Use the App Engine service account with delegated domain-wide authority.
Correct Answer: B
Reference:
https://developers.google.com/drive/api/v3/about-auth
Selected Answer: C
The correct answer is C. The link you proposed clearly states that you can prevent access to the underlying dataset and give access only to the data that is in the view after applying the query.
upvoted 1 times
A is Correct.
upvoted 1 times
Selected Answer: A
A is correct
upvoted 2 times
Selected Answer: A
A is correct
upvoted 3 times
Selected Answer: A
A is correct because each user should have its own access token rather giving delegated wide domain access.
upvoted 2 times
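For illustration, a hedged sketch of the per-user OAuth flow behind option A, built with the google-auth-oauthlib and google-api-python-client libraries. The client_secret.json file name, the uploaded file, and the installed-app flow are assumptions; a web app running on Compute Engine would use the equivalent server-side web flow with the same drive.file scope:

    # pip install google-auth-oauthlib google-api-python-client
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    # drive.file only grants access to files the app itself creates or opens.
    SCOPES = ["https://www.googleapis.com/auth/drive.file"]

    # client_secret.json is a placeholder for the downloaded OAuth Client ID file.
    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    creds = flow.run_local_server(port=0)  # each user consents; each gets their own token

    drive = build("drive", "v3", credentials=creds)
    media = MediaFileUpload("report.txt", mimetype="text/plain")  # placeholder file
    created = drive.files().create(
        body={"name": "report.txt"}, media_body=media, fields="id"
    ).execute()
    print("created file", created["id"], "in the user's own Drive")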
You are creating a Google Kubernetes Engine (GKE) cluster and run a gcloud container clusters create command. It fails with an error indicating that
the request exceeds the available 'CPUS' quota. What should you do?
A. Request additional GKE quota in the GCP Console.
B. Request additional Compute Engine quota in the GCP Console.
C. Open a support case to request additional GKE quota.
D. Decouple services in the cluster, and rewrite new clusters to function with fewer cores.
Correct Answer: A
Selected Answer: B
Correct answer B: you should request additional Compute Engine quota in the GCP Console.
upvoted 1 times
Selected Answer: B
The GKE nodes are Compute Engine instances, so if you need more CPUs you need to ask for more quota for these.
Selected Answer: B
B is correct
upvoted 3 times
Selected Answer: B
Error message mentions "CPU" so this would refer to Compute Engine VMs
Answer is B
upvoted 4 times
You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction amount (a number). You want
to calculate the sum of all transaction amounts for each unique account number efficiently.
A. A linked list
B. A hash table
C. A two-dimensional array
D. A comma-delimited string
Correct Answer: B
Selected Answer: C
With hash tables, you cannot store multiple values (amounts) under a single key (account number), but this is exactly what you need to do.
A two-dimensional array can be used to store all the account-amount pairs (the timestamp is useless).
So my selected answer is C.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 3 times
Selected Answer: B
Hash table with the account number as the key, the timestamp is useless for this question, so we can safely discard it.
upvoted 3 times
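As a concrete illustration of option B (and of why the objection above does not hold), a short Python sketch that aggregates the log with a hash table; multiple amounts per account are simply folded into one running sum, so a single-valued dict suffices. The comma-separated log layout is an assumption:

    import csv
    from collections import defaultdict

    def sum_by_account(path: str) -> dict:
        # One pass over the log; the hash table keys on account number and
        # accumulates amounts, so lookups and updates are O(1) on average.
        totals = defaultdict(float)
        with open(path, newline="") as f:
            for timestamp, account, amount in csv.reader(f):
                totals[account] += float(amount)  # the timestamp is irrelevant here
        return dict(totals)

    # Example line: 2023-12-07T14:54:00,ACCT-42,19.99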
Your company has a BigQuery dataset named "Master" that keeps information about employee travel and expenses. This information is organized
by employee department. That means employees should only be able to view information for their department. You want to apply a security model
while minimizing the number of steps required. What should you do?
A. Create a separate dataset for each department. Create a view with an appropriate WHERE clause to select records from a particular dataset
for the specific department. Authorize this view to access records from your Master dataset. Give employees the permission to this
department-specific dataset.
B. Create a separate dataset for each department. Create a data pipeline for each department to copy appropriate information from the
Master dataset to the specific dataset for the department. Give employees the permission to this department-specific dataset.
C. Create a dataset named Master dataset. Create a separate view for each department in the Master dataset. Give employees access to the
view of their department.
D. Create a dataset named Master dataset. Create a separate table for each department in the Master dataset. Give employees access to the
table of their department.
Correct Answer: B
Selected Answer: C
correct answer is C. In the link you proposed about access views, it is clearly stated that you can prevent access to the underlying dataset and give
access only to data that is in the view after applying the query.
upvoted 1 times
Selected Answer: C
This approach allows you to maintain a single “master” dataset, while using views to control access to data based on department. This minimizes
the number of steps required, as you don’t need to create separate datasets or data pipelines for each department.
upvoted 2 times
Selected Answer: C
correct answer is C. The view answers the need for access control; A is eliminated because creating a dataset per department takes more steps.
upvoted 1 times
C is correct
upvoted 2 times
Selected Answer: C
It should be C.
Why create a separate dataset when you can get it done simply by creating a view in the same dataset.
upvoted 1 times
Selected Answer: A
A is more suitable
upvoted 1 times
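For reference, a hedged google-cloud-bigquery sketch of the authorized-view pattern the C voters describe; all project, dataset, table, and department names are placeholders. Note that BigQuery's authorized views are typically placed in a dataset separate from the source tables, which is worth keeping in mind when weighing options A and C:

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client()

    # Hypothetical names; the source table lives in the Master dataset.
    view = bigquery.Table("my-project.department_views.finance_expenses")
    view.view_query = """
        SELECT * FROM `my-project.master.travel_expenses`
        WHERE department = 'finance'
    """
    view = client.create_table(view)

    # Authorize the view against the source dataset, so readers of the view
    # do not need (and do not get) access to the underlying Master tables.
    source = client.get_dataset("my-project.master")
    entries = list(source.access_entries)
    entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
    source.access_entries = entries
    client.update_dataset(source, ["access_entries"])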
You have an application in production. It is deployed on Compute Engine virtual machine instances controlled by a managed instance group.
Traffic is routed to the instances via an HTTP(S) load balancer. Your users are unable to access your application. You want to implement a check
that alerts you when the application is unavailable. What should you use?
A. Smoke tests
B. Stackdriver uptime checks
Correct Answer: B
Reference:
https://medium.com/google-cloud/stackdriver-monitoring-automation-part-3-uptime-checks-476b8507f59c
C,D can both check but not 'alert', so I think the answer is B
upvoted 5 times
Selected Answer: B
Stackdriver Uptime Check is the correct option as we can configure it to send an alert when the service is down.
upvoted 1 times
B is correct
upvoted 3 times
"If a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region. If all backends are unhealthy, the
load balancer returns an HTTP 503 Service Unavailable response."
"One or more backends must be connected to the backend service. Because the scope of an internal HTTP(S) load balancer is regional, not global,
clients and backend VMs or endpoints must all be in the same region. Backends can be instance groups or NEGs in any of the following
configurations:
Managed instance groups (zonal or regional)"
I would take C as the answer since the application runs on the MIG and traffic is being controlled by the load balancer
upvoted 2 times
Correct answer is B as Stackdriver or Cloud Monitoring uptime checks can be used to check if the application is unavailable.
https://cloud.google.com/monitoring/uptime-checks:
"An uptime check is a request sent to a resource to see if it responds. You can use uptime checks to determine the availability of a VM instance,
an App Engine service, a URL, or an AWS load balancer."
upvoted 5 times
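For completeness, a hedged sketch of creating such an uptime check programmatically with the google-cloud-monitoring client; the project ID, display name, host, and /healthz path are placeholders, and an alerting policy with a notification channel still has to be attached before the check actually pages anyone:

    from google.cloud import monitoring_v3  # pip install google-cloud-monitoring

    client = monitoring_v3.UptimeCheckServiceClient()

    config = monitoring_v3.UptimeCheckConfig(
        display_name="app-frontend-uptime",
        monitored_resource={
            "type": "uptime_url",
            "labels": {"host": "app.example.com"},  # the load balancer's hostname
        },
        http_check={"path": "/healthz", "port": 443, "use_ssl": True},
        timeout={"seconds": 10},
        period={"seconds": 60},
    )
    created = client.create_uptime_check_config(
        parent="projects/my-project",  # placeholder project
        uptime_check_config=config,
    )
    print("created", created.name)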
You are load testing your server application. During the first 30 seconds, you observe that a previously inactive Cloud Storage bucket is now
servicing 2000 write requests per second and 7500 read requests per second. Your application is now receiving intermittent 5xx and 429 HTTP
responses from the Cloud Storage JSON API as the demand escalates. You want to decrease the failed responses from the Cloud Storage API.
What should you do?
A. Distribute the uploads across a large number of individual storage buckets.
B. Use the XML API instead of the JSON API for interfacing with Cloud Storage.
C. Pass the HTTP response codes back to clients that are invoking the uploads from your application.
D. Limit the upload rate from your application clients so that the dormant bucket's peak request rate is reached more gradually.
Correct Answer: A
Reference:
https://cloud.google.com/storage/docs/request-rate
Answer is D
https://cloud.google.com/storage/docs/request-rate#ramp-up
upvoted 12 times
Selected Answer: D
Google Cloud Storage buckets have an initial limit on the request rate. If a bucket is inactive for a period of time, it will have a lower limit. As traffic
increases, Google Cloud Storage dynamically increases the request rate limit. This process can take several minutes. If the traffic increases too
quickly, you may see intermittent 5xx and 429 HTTP responses. By limiting the upload rate from your application clients, you allow the bucket’s
peak request rate to increase more gradually, reducing the chance of receiving these errors.
upvoted 1 times
A, distributing the uploads across a large number of individual storage buckets, may not necessarily decrease the failed responses and could
potentially increase the complexity of the system.
B, using the XML API instead of the JSON API, may not necessarily improve performance and could require significant changes to the application.
C, passing the HTTP response codes back to clients, may not address the root cause of the issue and could lead to further errors.
upvoted 1 times
Selected Answer: D
D is correct
upvoted 2 times
Selected Answer: D
D is correct
upvoted 3 times
Selected Answer: D
Vote for D
upvoted 1 times
Selected Answer: D
Answer is D
upvoted 1 times
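A client-side sketch of option D following the ramp-up guidance in the request-rate document linked above; the request callable, retry budget, and constants are assumptions:

    import random
    import time

    # Ramp-up guidance from https://cloud.google.com/storage/docs/request-rate:
    # start a dormant bucket below ~1000 writes/sec and no more than double
    # the rate every 20 minutes, so Cloud Storage can redistribute the load.

    def ramped_rate(elapsed_s: float, start_rate: float = 1000.0) -> float:
        # Client-side cap on requests/sec, doubling every 20 minutes.
        return start_rate * (2 ** int(elapsed_s // 1200))

    def call_with_backoff(request, max_tries: int = 6):
        # request is a hypothetical callable that raises on 429/5xx responses.
        for attempt in range(max_tries):
            try:
                return request()
            except Exception:
                if attempt == max_tries - 1:
                    raise
                time.sleep(min(2 ** attempt + random.random(), 32))  # backoff + jitter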
Your application is controlled by a managed instance group. You want to share a large read-only data set between all the instances in the
managed instance group. You want to ensure that each instance can start quickly and can access the data set via its filesystem with very low
latency. You also want to minimize the total cost of the solution.
A. Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem using Cloud Storage FUSE.
B. Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the instance via a startup script.
C. Move the data to a Compute Engine persistent disk, and attach the disk in read-only mode to multiple Compute Engine virtual machine
instances.
D. Move the data to a Compute Engine persistent disk, take a snapshot, create multiple disks from the snapshot, and attach each disk to its
own instance.
Correct Answer: C
C:
https://cloud.google.com/compute/docs/disks/sharing-disks-between-vms#use-multi-instances
Share a disk in read-only mode between multiple VMs
Sharing static data between multiple VMs from one persistent disk is "less expensive" than replicating your data to unique disks for individual
instances.
https://cloud.google.com/compute/docs/disks/gcs-buckets#mount_bucket
Mounting a bucket as a file system
You can use the Cloud Storage FUSE tool to mount a Cloud Storage bucket to your Compute Engine instance. The mounted bucket behaves
similarly to a persistent disk even though Cloud Storage buckets are object storage.
https://github.com/GoogleCloudPlatform/gcsfuse/
Cloud Storage FUSE performance issues: Latency, Rate limit
upvoted 7 times
Selected Answer: C
Option C is Correct.
upvoted 1 times
Selected Answer: C
Selected Answer: C
C is correct
upvoted 2 times
A & C are both workable; however, the question asks for low latency, hence C is the correct one.
upvoted 1 times
I would take B
upvoted 1 times
You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by multiple clients within the
same Virtual
Private Cloud (VPC). You want clients to be able to get the IP address of the service.
A. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Clients should use this IP address
to connect to the service.
B. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Then, define an A record in Cloud
DNS. Clients should use the name of the A record to connect to the service.
C. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url
https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.
D. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[API_NAME]/[API_VERSION]/.
Correct Answer: D
My vote is answer C)
"Virtual Private Cloud networks on Google Cloud have an internal DNS service that lets instances in the same network access each other by using
internal DNS names"
This name can be used for access: [INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal
https://cloud.google.com/compute/docs/internal-dns#access_by_internal_DNS
upvoted 20 times
Selected Answer: C
This is a simple and effective way to enable communication between services within the same VPC without the need for external IP addresses or
load balancing services.
upvoted 1 times
C is the correct answer: By connecting to the instance name with the url https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/, clients can use
Compute Engine's internal DNS to access the API hosted on the virtual machine instance within the same VPC. This will allow clients to access the
API with low latency and without the need for a static external IP address.
upvoted 1 times
C is correct
upvoted 1 times
https://cloud.google.com/compute/docs/internal-dns
The answer is C.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 3 times
vote for C
upvoted 1 times
Selected Answer: C
With no doubt, it is C
upvoted 1 times
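A small, hedged Python sketch of option C as seen from a client VM in the same VPC; the instance, zone, project, and /v1/status path are placeholders (resolution of .internal names only works from inside the VPC network):

    import socket
    import urllib.request

    # Zonal internal DNS name: [INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal
    HOST = "api-vm.us-central1-a.c.my-project.internal"  # placeholder names

    print("resolved to", socket.gethostbyname(HOST))

    with urllib.request.urlopen(f"http://{HOST}/v1/status", timeout=5) as resp:
        print(resp.status, resp.read()[:100])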
Your application is logging to Stackdriver. You want to get the count of all requests on all /api/alpha/* endpoints.
C. Export the logs to Cloud Storage and count lines matching /api/alpha.
D. Export the logs to Cloud Pub/Sub and count lines matching /api/alpha.
Correct Answer: C
Ans: B
B have the correct endpoint /api/alpha/*,
A only get one endpoint counter
upvoted 17 times
Answer should be A
upvoted 9 times
Selected Answer: B
Ans: B
upvoted 1 times
This option will accurately track the number of requests made to all endpoints nested under /api/alpha/*.
upvoted 1 times
Selected Answer: B
answer is B, with the correct endpoint; the goal of a counter metric is exactly to answer the need to count calls.
upvoted 1 times
C is correct
upvoted 2 times
https://cloud.google.com/monitoring/charts/metrics-selector#filter-option
To match any US zone that ends with "a", you could use the regular expression ^us.*.a$.
upvoted 3 times
I would take C
upvoted 1 times
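For illustration, a hedged sketch of option B using the google-cloud-logging client to create a logs-based counter metric; the project ID, metric name, and the httpRequest.requestUrl filter field (which depends on how the application writes its request logs) are assumptions:

    from google.cloud import logging  # pip install google-cloud-logging

    client = logging.Client(project="my-project")  # placeholder project

    # Counter metric over all /api/alpha/* endpoints, using a regex match.
    metric = client.metric(
        "alpha_request_count",
        filter_='httpRequest.requestUrl=~"/api/alpha/.*"',
        description="Count of requests to /api/alpha/* endpoints",
    )
    metric.create()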
You want to re-architect a monolithic application so that it follows a microservices model. You want to accomplish this efficiently while
minimizing the risk to the existing application. What should you do?
C. Refactor the monolithic application with appropriate microservices in a single effort and deploy it.
D. Build a new application with the appropriate microservices separate from the monolith and replace it when it is complete.
Correct Answer: C
Reference:
https://cloud.google.com/solutions/migrating-a-monolithic-app-to-microservices-gke
Selected Answer: B
https://cloud.google.com/architecture/microservices-architecture-refactoring-monoliths
upvoted 1 times
B is correct
upvoted 3 times
Selected Answer: B
upvoted 3 times
Selected Answer: B
B is correct. You don't want to replace the monolithic application in one go, as C suggests, which kind of defeats the purpose.
upvoted 2 times
A is completely wrong
C and D can be done but the amount of risk involved can be too great
upvoted 2 times
Your existing application keeps user state information in a single MySQL database. This state information is very user-specific and depends
heavily on how long a user has been using an application. The MySQL database is causing challenges to maintain and enhance the schema for
various users. Which storage option should you choose?
A. Cloud SQL
B. Cloud Storage
C. Cloud Spanner
D. Cloud Datastore/Firestore
Correct Answer: A
Reference:
https://cloud.google.com/solutions/migrating-mysql-to-cloudsql-concept
The question says that there are challenges to maintain and enhance the schema, so a schemaless DB is preferable. Moreover, Google mentions that
Datastore/Firestore is good for user profiles (https://cloud.google.com/datastore/docs/concepts/overview#what_its_good_for).
Answer: D
upvoted 19 times
Product catalogs that provide real-time inventory and product details for a retailer.
User profiles that deliver a customized experience based on the user’s past activities and preferences.
Transactions based on ACID properties, for example, transferring funds from one bank account to another."
upvoted 1 times
Selected Answer: D
The question is a bit misleading. If it's asking to keep a MySQL storage option, then Cloud SQL or Spanner are the only options. However, assuming
that they want to move away from a rigid schema, and given the need for a stateful DB, I would go for Datastore/Firestore.
upvoted 2 times
Selected Answer: D
D is correct
upvoted 3 times
D is the answer
upvoted 3 times
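To make option D concrete, a hedged google-cloud-firestore sketch of schemaless per-user state; the collection name, document ID, and fields are invented:

    from google.cloud import firestore  # pip install google-cloud-firestore

    db = firestore.Client()

    # Each user document can carry its own shape; no schema migration is
    # needed when long-time users accumulate extra fields.
    db.collection("user_state").document("user-123").set(
        {
            "tenure_days": 842,  # placeholder fields
            "preferences": {"theme": "dark"},
            "legacy_flags": ["beta_opt_in"],
        },
        merge=True,  # upsert: add or overwrite fields without touching the rest
    )

    print(db.collection("user_state").document("user-123").get().to_dict())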
You are building a new API. You want to minimize the cost of storing images and reduce the latency of serving them. Which architecture should you use?
Correct Answer: B
Answer D) seems more suitable as Cloud Storage has low cost and CDN provides low serving latency
upvoted 16 times
Selected Answer: D
Cloud CDN is a content delivery network that uses Google's globally distributed edge points of presence to accelerate content delivery for
websites and applications served out of Google Cloud. Cloud CDN stores and serves content from Google Cloud Storage, which allows for efficient
and low-cost storage of images, as well as low latency in serving the images. The other options do not mention low latency or cost-effective
storage as their primary benefits.
upvoted 2 times
D is correct
upvoted 3 times
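As a companion to the Cloud Storage + Cloud CDN reasoning above, a hedged google-cloud-storage sketch that uploads an image with a cacheable Cache-Control header; the bucket and object names are placeholders, and enabling Cloud CDN itself happens on the load balancer's backend bucket, which this snippet does not show:

    from google.cloud import storage  # pip install google-cloud-storage

    client = storage.Client()
    bucket = client.bucket("my-image-bucket")  # placeholder bucket

    blob = bucket.blob("images/logo.png")
    # Long-lived, publicly cacheable objects let CDN edge caches serve most
    # reads, cutting both serving latency and per-request cost.
    blob.cache_control = "public, max-age=86400"
    blob.upload_from_filename("logo.png")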
Your company's development teams want to use Cloud Build in their projects to build and push Docker images to Container Registry. The
operations team requires all Docker images to be published to a centralized, securely managed Docker registry that the operations team manages.
A. Use Container Registry to create a registry in each development team's project. Configure the Cloud Build build to push the Docker image to
the project's registry. Grant the operations team access to each development team's registry.
B. Create a separate project for the operations team that has Container Registry configured. Assign appropriate permissions to the Cloud
Build service account in each developer team's project to allow access to the operation team's registry.
C. Create a separate project for the operations team that has Container Registry configured. Create a Service Account for each development
team and assign the appropriate permissions to allow it access to the operations team's registry. Store the service account key file in the
source code repository and use it to authenticate against the operations team's registry.
D. Create a separate project for the operations team that has the open source Docker Registry deployed on a Compute Engine virtual machine
instance. Create a username and password for each development team. Store the username and password in the source code repository and
use them to authenticate against the operations team's registry.
Correct Answer: A
Reference:
https://cloud.google.com/container-registry/
Selected Answer: B
I would go with B.
upvoted 1 times
Option B is the best solution because it provides a centralized, securely managed Docker registry for all development teams to use, and the
Cloud Build service account can be granted the necessary permissions to push Docker images to the registry.
upvoted 1 times
Selected Answer: B
C is not the best choice because it requires storing the service account key file in the source code repository, which may not be secure.
upvoted 1 times
B is correct
upvoted 2 times
Selected Answer: B
B is the answer
upvoted 1 times
You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. Your application can scale horizontally, and each
instance of your application needs to have a stable network identity and its own persistent disk.
A. Deployment
B. StatefulSet
C. ReplicaSet
D. ReplicaController
Correct Answer: B
Reference:
https://livebook.manning.com/book/kubernetes-in-action/chapter-10/46
Selected Answer: B
A StatefulSet is the Kubernetes object best suited for workloads where each pod needs a stable network identity and its own persistent disk.
upvoted 1 times
StatefulSets are used to manage the deployment and scaling of stateful applications. They provide a stable network identity and persistent storage
for each instance of the application. They are designed to work with applications that require a stable network identity and persistent storage, such
as databases, message brokers, and other stateful applications. In contrast, Deployments are used to manage the deployment and scaling of
stateless applications, which do not require a stable network identity or persistent storage. ReplicaSets and ReplicaControllers are similar to
Deployments, but are older and less commonly used in GKE.
upvoted 1 times
B is correct
upvoted 3 times
Once created, the StatefulSet ensures that the desired number of Pods are running and available at all times. The StatefulSet automatically replaces
Pods that fail or are evicted from their nodes, and automatically associates new Pods with the storage resources, resource requests and limits, and
other configurations defined in the StatefulSet's Pod specification
upvoted 2 times
You are using Cloud Build to build a Docker image. You need to modify the build to execute unit and run integration tests. When there is a failure,
you want the build history to clearly display the stage at which the build failed.
A. Add RUN commands in the Dockerfile to execute unit and integration tests.
B. Create a Cloud Build build config file with a single build step to compile unit and integration tests.
C. Create a Cloud Build build config file that will spawn a separate cloud build pipeline for unit and integration tests.
D. Create a Cloud Build build config file with separate cloud builder steps to compile and execute unit and integration tests.
Correct Answer: D
Selected Answer: D
I would go with D.
upvoted 1 times
D is correct
upvoted 3 times
Selected Answer: D
Vote D
upvoted 1 times
Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket owned by project B. However, the
write fails with a permission error. What should you do?
A. Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.
B. Grant your user account the roles/iam.serviceAccountUser role for the [email protected]
service account.
C. Grant the [email protected] service account the roles/storage.objectCreator role for the Cloud
Storage bucket.
Correct Answer: B
The answer is C : the default service account use by cloud function is [email protected] (cf.
https://cloud.google.com/functions/docs/concepts/iam#troubleshooting_permission_errors)
upvoted 17 times
Selected Answer: C
Correct : C
upvoted 1 times
In order for the Cloud Functions code running in project A to write to a Cloud Storage bucket in project B, the service account that is used to
execute the code needs to be granted the appropriate permissions. In this case, you should grant the
[email protected] service account the roles/storage.objectCreator role for the Cloud Storage bucket in
project B. This will allow the code to write objects to the bucket. Option A would not work because it is the service account, not your user account,
that needs to be granted permissions.
Selected Answer: C
C is correct
upvoted 3 times
Answer is C
upvoted 2 times
https://cloud.google.com/functions/docs/troubleshooting:
"The Cloud Functions service uses the Cloud Functions Service Agent service account
(service-<PROJECT_NUMBER>@gcf-admin-robot.iam.gserviceaccount.com) when performing administrative actions on your project. By default this
account is assigned the Cloud Functions cloudfunctions.serviceAgent role. This role is required for Cloud Pub/Sub, IAM, Cloud Storage and Firebase
integrations. If you have changed the role for this service account, deployment fails."
Answer is C
upvoted 3 times
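A hedged google-cloud-storage sketch of the IAM grant itself; the project, bucket, and service account email are placeholders, and the member must be whichever service account the function actually executes as:

    from google.cloud import storage  # pip install google-cloud-storage

    # Run with credentials allowed to administer project B's bucket.
    client = storage.Client(project="project-b")       # placeholder
    bucket = client.bucket("project-b-output-bucket")  # placeholder

    member = "serviceAccount:[email protected]"

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {"role": "roles/storage.objectCreator", "members": {member}}
    )
    bucket.set_iam_policy(policy)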
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However,
there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions
included in this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might
contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is
independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to
the next section of the exam. After you begin a new section, you cannot return to this section.
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study
before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem
statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the
subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and
organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in
Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in
demand around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid
growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10,000 miles away
from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to
hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides
clear uptime data.
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands
their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* Logging must be increased, and data should be stored in a cloud analytics platform.
HipLocal's .net-based auth service fails under intermittent load. What should they do?
A. Use App Engine for autoscaling.
B. Use Cloud Functions for autoscaling.
C. Use a Compute Engine cluster for the service.
D. Use a dedicated Compute Engine virtual machine instance for the service.
Correct Answer: D
Reference:
https://www.qwiklabs.com/focuses/611?parent=catalog
A is the correct answer here: App Engine, as App Engine flexible supports .NET.
upvoted 11 times
Selected Answer: A
A, since AppEngine can be deployed in several regions with a Load Balancer in front as demonstrated by Google. This will make the deployment
serverless as requested, sticking to .net framework.
upvoted 1 times
Selected Answer: B
B is correct.
upvoted 2 times
Selected Answer: C
upvoted 1 times
Selected Answer: A
C is correct because App Engine is regional only, so it does not answer the need of a global application.
upvoted 1 times
Selected Answer: A
A is correct
upvoted 1 times
Selected Answer: A
A is the correct answer here: App Engine, since App Engine flexible supports .NET.
upvoted 1 times
Selected Answer: C
Initially, I had picked option A considering the flexible request support. However, it imposes the limitation of a single region which is not expected
in the case study. So, the only suitable option here is to use Compute Engine Cluster.
upvoted 2 times
A is the answer
upvoted 4 times
Case study -
(The HipLocal case study text repeated here is identical to the case study reproduced above.)
HipLocal's APIs are having occasional application failures. They want to collect application information specifically to troubleshoot the issue.
Which method should they use?
Correct Answer: C
Selected Answer: B
I would go with B.
upvoted 1 times
B.
A: No. This is for VM boot failed or other not related to API. The question didn't mention VM failed, nor the scenario.
B: Yes.
C: No. Monitoring agent is for metrics collection, such as memory. Not related to API.
D: No. If the question stated something like the API works perfectly but slow, then this would be valid.
upvoted 1 times
They don't have logging, so they need to add the logging agent so we have logs to study. Trace is for latency issues, and that's not the issue here.
upvoted 1 times
D is correct
upvoted 1 times
Selected Answer: B
Case study -
(The HipLocal case study text repeated here is identical to the case study reproduced above.)
HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks. Which IP
strategy should they use?
A. Create manual subnets.
B. Create an auto mode subnet.
C. Create multiple peered VPCs.
D. Provision a single instance for NAT.
Correct Answer: A
Selected Answer: B
For simplicity and ease of management, an auto mode subnet (option B) could be a good choice.
upvoted 1 times
Selected Answer: A
A is correct
upvoted 1 times
Selected Answer: A
A - Need to take control of the IP assignment through manual subnets, especially when establishing connectivity between on-prem and cloud
upvoted 4 times
I will go with auto mode subnet creation as it will automatically create a subnet inside each region. Moreover, one of the business requirements
states that 'Reduce infrastructure management time and cost.'. Thus, with auto mode subnet we avoid infrastructure management.
upvoted 2 times
Case study -
(The HipLocal case study text repeated here is identical to the case study reproduced above.)
Which service should HipLocal use to enable access to internal apps?
A. Cloud VPN
B. Cloud Armor
C. Virtual Private Cloud
D. Cloud Identity-Aware Proxy
Correct Answer: D
Reference:
https://cloud.google.com/iap/docs/cloud-iap-for-on-prem-apps-overview
I think it should be D.
upvoted 7 times
Selected Answer: D
Cloud IAP works by verifying user identity and context of the request to determine if a user should be allowed to access the application. It provides
secure application-level access control and does not require a traditional VPN connection.
upvoted 1 times
D is correct
upvoted 1 times
Case study -
(The HipLocal case study text repeated here is identical to the case study reproduced above.)
HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling. Which two services should they choose? (Choose two.)
A. Use Google App Engine services.
B. Use serverless Google Cloud Functions.
C. Use Knative to build and deploy serverless applications.
D. Use Google Kubernetes Engine for automated deployments.
E. Use a large Google Compute Engine cluster for deployments.
Correct Answer: BC
Selected Answer: AB
I think it should be A and B for serverless autoscaling. Since there are extra steps involved in configuring Knative, it is not fit for this situation.
upvoted 1 times
Selected Answer: AB
Selected Answer: CD
CD are correct
upvoted 1 times
Selected Answer: CD
https://cloud.google.com/kubernetes-engine/docs/concepts/traffic-management
upvoted 1 times
Selected Answer: CD
Selected Answer: CD
App Engine cannot be a solution here as it limits the application to be in a single region. We need to note that the case study has explicitly
mentioned that the application needs to be global, which means multi-regional.
So, I will go with C & D.
upvoted 3 times
C+D are correct. Because k8s is global plus on-premises nodes can be connected.
upvoted 1 times
https://cloud.google.com/knative
https://cloud.google.com/blog/products/serverless/knative-based-cloud-run-services-are-ga
https://cloud.google.com/run/docs/multiple-regions
upvoted 1 times
Selected Answer: AB
https://cloud.google.com/appengine/docs/standard/python/how-instances-are-managed#apps_with_automatic_scaling
upvoted 1 times
A and B for sure; "eliminate manual scaling" as per what the qn states
upvoted 2 times
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However,
there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might
contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to
the next section of the exam. After you begin a new section, you cannot return to this section.
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study
before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem
All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and
organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in
Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid
growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to
hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands
their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* Logging must be increased, and data should be stored in a cloud analytics platform.
In order to meet their business requirements, how should HipLocal store their application state?
Correct Answer: B
For me the answer is C. A is not valid because local SSD storage is ephemeral. B and D are bad solutions because they don't reduce latency worldwide; they are regional.
upvoted 12 times
Selected Answer: C
Moving the state storage to Cloud Spanner, is the best solution for HipLocal because Cloud Spanner is a globally distributed, horizontally scalable,
and strongly consistent relational database service.
Cloud Spanner provides high availability and can handle automatic scaling, backups, and updates. In addition, it provides support for distributed
transactions, and it's designed to provide low latency and high throughput.
Cloud Spanner can also help HipLocal meet their compliance requirements, as it supports HIPAA, GDPR, and other regulatory standards.
upvoted 1 times
Selected Answer: C
Given HipLocal’s requirements for global expansion and high availability, moving the state storage to Cloud Spanner (option C) would likely be the
most suitable choice.
upvoted 1 times
Selected Answer: D
https://stackoverflow.com/questions/60412688/whats-the-difference-between-google-cloud-spanner-and-cloud-sql#:~:text=The%20main%20difference%20between%20Cloud,of%20writes%20per%20second%2C%20globally.
Selected Answer: D
D is correct
upvoted 1 times
Vote for D, as Cloud SQL can also be set up with cross-region replicas:
https://cloud.google.com/blog/products/databases/introducing-cross-region-replica-for-cloud-sql
upvoted 2 times
Selected Answer: C
Selected Answer: D
Case study -
(This question repeats the same HipLocal case study text as above.)
A. Cloud Armor
B. Cloud Functions
C. Cloud Endpoints
Correct Answer: D
Selected Answer: C
Google Cloud Endpoints is a distributed API management system that provides an API console, hosting, logging, monitoring, and other features to
help you create, deploy, and manage APIs on a large scale.
upvoted 1 times
C is correct
upvoted 1 times
Selected Answer: C
vote for C
upvoted 1 times
Selected Answer: C
Case study -
(This question repeats the same HipLocal case study text as above.)
HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements.
A. Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine.
B. Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration.
C. Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.
D. Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.
Correct Answer: B
C is correct answer
upvoted 10 times
D is the correct answer; Cloud SQL provides redundancy without further configuration.
upvoted 1 times
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 1 times
Correct Answer is C
upvoted 1 times
https://cloud.google.com/sql/docs/mysql/high-availability
Answer is C
upvoted 2 times
Your application is running in multiple Google Kubernetes Engine clusters. It is managed by a Deployment in each cluster. The Deployment has
created multiple replicas of your Pod in each cluster. You want to view the logs sent to stdout for all of the replicas in your Deployment in all
clusters.
Correct Answer: D
Selected Answer: B
B is correct
upvoted 5 times
Selected Answer: B
B is correct
upvoted 1 times
The gcloud logging read command reads log entries from the specified logs. It does not allow you to view the logs of specific replicas in a
Deployment across multiple clusters. kubectl logs allows you to view the logs of a specific Pod or Deployment across multiple clusters. You can
specify the Deployment name and the relevant parameters to view the logs of all replicas in the Deployment.
For example, the following command would allow you to view the logs of all replicas in a Deployment named "my-deployment" in all clusters:
Selected Answer: B
Correct answer is B
upvoted 1 times
Selected Answer: B
Correct answer is B
upvoted 2 times
Correct answer is B
https://cloud.google.com/logging/docs/reference/tools/gcloud-logging#examples_2
upvoted 2 times
Using the "gcloud logging read" command, select the appropriate cluster, node, pod, and container logs.
https://cloud.google.com/stackdriver/docs/solutions/gke/using-logs#accessing_your_logs
However, if you use "kubectl logs" to view logs on the CLI, the logs won't be very readable: each line is printed as a JSON object.
https://medium.com/google-cloud/display-gke-logs-in-a-text-format-with-kubectl-db0169be0282
upvoted 3 times
I would take A
upvoted 1 times
https://cloud.google.com/blog/products/management-tools/using-logging-your-apps-running-kubernetes-engine:
"gcloud command line tool – Using the gcloud logging read command, select the appropriate cluster, node, pod and container logs."
upvoted 3 times
You are using Cloud Build to create a new Docker image on each source code commit to a Cloud Source Repositories repository. Your application
is built on every commit to the master branch. You want to release specific commits made to the master branch in an automated method.
B. Create a build trigger on a Git tag pattern. Use a Git tag convention for new releases.
C. Create a build trigger on a Git branch name pattern. Use a Git branch naming convention for new releases.
D. Commit your source code to a second Cloud Source Repositories repository with a second Cloud Build trigger. Use this repository for new
releases only.
Correct Answer: C
Reference:
https://docs.docker.com/docker-hub/builds/
B is correct answer
upvoted 19 times
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 3 times
I don't know why people are selecting C; the question says the commit is made to master. C literally does not make sense: how can a commit to a feature branch trigger a master build?
upvoted 3 times
Selected Answer: B
Vote B
upvoted 2 times
I would take C, since the question states that the commit is made to a branch.
upvoted 2 times
https://cloud.google.com/source-repositories/docs/integrating-with-cloud-build#create_a_build_trigger
Both branch name and tag are valid triggers, but the question suggests we want to target specific commits to our master branch, therefore B,
using a tag, seems more legit in this case.
upvoted 10 times
Looks like they're committing to the master branch, hence a tag would make more sense than a branch name.
upvoted 4 times
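For reference, a sketch of option B's setup with assumed repository and file names:
# Build only when a release tag such as v1.2.0 is pushed.
gcloud builds triggers create cloud-source-repositories \
  --repo=my-repo \
  --tag-pattern='^v[0-9]+\.[0-9]+\.[0-9]+$' \
  --build-config=cloudbuild.yaml
Tagging a specific commit on master (git tag v1.2.0 && git push origin v1.2.0) then releases exactly that commit.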
You are designing a schema for a table that will be moved from MySQL to Cloud Bigtable. The MySQL table is as follows:
How should you design a row key for Cloud Bigtable for this table?
Correct Answer: C
correct answer is B
https://cloud.google.com/bigtable/docs/schema-design
upvoted 20 times
"For example, your application might need to record performance-related data, such as CPU and memory usage, once per second for a large
number of machines. Your row key for this data could combine an identifier for the machine with a timestamp for the data (for example,
machine_4223421#1425330757685). Keep in mind that row keys are sorted lexicographically."
upvoted 3 times
Selected Answer: B
I would go with B.
upvoted 1 times
The primary key in the MySQL table is a composite key consisting of Account_id and Event_timestamp, so it would make sense to use both of these
values as the row key in Cloud Bigtable. This allows for efficient querying and sorting by both Account_id and Event_timestamp.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 3 times
Selected Answer: B
https://cloud.google.com/bigtable/docs/schema-design#row-keys
It's B because:
"Row keys that start with a timestamp. This pattern causes sequential writes to be pushed onto a single node, creating a hotspot. If you put a
timestamp in a row key, precede it with a high-cardinality value like a user ID to avoid hotspotting."
upvoted 3 times
Selected Answer: B
Include a timestamp as part of your row key and avoid having timestamp at the start of the key
upvoted 1 times
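A minimal sketch of that key shape using the cbt CLI, with project, instance, table, and values assumed for illustration:
# Row key puts the high-cardinality account id first and the timestamp second,
# so sequential writes spread across nodes instead of hotspotting.
cbt -project my-project -instance my-instance createtable user_events
cbt -project my-project -instance my-instance createfamily user_events events
cbt -project my-project -instance my-instance set user_events 'account_4223421#1425330757685' events:type=login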
You want to view the memory usage of your application deployed on Compute Engine.
Correct Answer: C
Reference:
https://stackoverflow.com/questions/43991246/google-cloud-platform-how-to-monitor-memory-usage-of-vm-instances
For me B is the correct answer, as you cannot read memory usage directly from Stackdriver without the Monitoring agent.
upvoted 8 times
Selected Answer: B
I would go with B.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 3 times
Vote B
upvoted 1 times
B is correct
upvoted 3 times
You have an analytics application that runs hundreds of queries on BigQuery every few minutes using the BigQuery API. You want to find out how much time your queries take to execute.
Correct Answer: D
You don't need to enable Trace for this. The best and correct option is D.
upvoted 7 times
Selected Answer: C
Selected Answer: C
https://www.exam-answer.com/the-best-way-to-measure-query-execution-time-in-bigquery
upvoted 1 times
Selected Answer: C
The correct answer is C. Use Stackdriver Trace to plot query execution time. Stackdriver Trace is a distributed tracing system that allows you to
profile and debug your application's performance. It allows you to trace requests across multiple services, and it provides a detailed breakdown of
where time is being spent within your application. Since you want to find out how much time your queries take to execute, using Stackdriver Trace
to plot query execution time would be the most appropriate approach.
upvoted 1 times
Selected Answer: D
D is correct
https://cloud.google.com/bigquery/docs/monitoring
upvoted 1 times
Selected Answer: D
D is correct
upvoted 3 times
You are designing a schema for a Cloud Spanner customer database. You want to store a phone number array field in a customer table. You also want to allow users to search customers by phone number.
A. Create a table named Customers. Add an Array field in a table that will hold phone numbers for the customer.
B. Create a table named Customers. Create a table named Phones. Add a CustomerId field in the Phones table to find the CustomerId from a
phone number.
C. Create a table named Customers. Add an Array field in a table that will hold phone numbers for the customer. Create a secondary index on this Array field.
D. Create a table named Customers as a parent table. Create a table named Phones, and interleave this table into the Customer table. Create a secondary index on the PhoneNumber field in the Phones table.
Correct Answer: C
Correct answer is C, as the question states: "You want to store a phone number array field in a customer table". So adding the phone numbers as an array field and adding a secondary index should be the best option in this case.
upvoted 7 times
Selected Answer: D
I will go with D.
upvoted 1 times
Selected Answer: D
D is correct
upvoted 3 times
Selected Answer: D
It's D.
upvoted 1 times
In SQL there is an ARRAY field type, and using the UNNEST function it is possible to filter records based on the array, so answer A.
upvoted 3 times
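A sketch of such a query against a hypothetical Customers table with an ARRAY&lt;STRING&gt; PhoneNumbers column (all names assumed):
gcloud spanner databases execute-sql my-db --instance=my-instance \
  --sql="SELECT c.CustomerId FROM Customers AS c, UNNEST(c.PhoneNumbers) AS phone WHERE phone = '+15550100'"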
You are deploying a single website on App Engine that needs to be accessible via the URL http://www.altostrat.com/.
A. Verify domain ownership with Webmaster Central. Create a DNS CNAME record to point to the App Engine canonical name
ghs.googlehosted.com.
B. Verify domain ownership with Webmaster Central. Define an A record pointing to the single global App Engine IP address.
C. Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Create a DNS CNAME record to point to the App Engine canonical name ghs.googlehosted.com.
D. Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Define an A record pointing to the single global App Engine IP address.
Correct Answer: A
Reference:
https://cloud.google.com/appengine/docs/flexible/dotnet/mapping-custom-domains?hl=fa
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
A. I have done that on a project, and you don't need to do routing with a dispatch.yaml file, so A is enough to have a custom domain linked to your App Engine app.
upvoted 1 times
A is correct
upvoted 3 times
Selected Answer: A
Vote A
upvoted 2 times
https://cloud.google.com/appengine/docs/flexible/dotnet/mapping-custom-domains?hl=fa:
"In A or AAAA records, the record data is an IP address"
upvoted 2 times
You are running an application on App Engine that you inherited. You want to find out whether the application is using insecure binaries or is vulnerable to XSS attacks. Which service should you use?
A. Cloud Armor
B. Stackdriver Debugger
Correct Answer: C
Reference:
https://cloud.google.com/security-scanner
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 3 times
Selected Answer: C
Vote C
upvoted 2 times
https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview:
"Web Security Scanner custom scans provide granular information about application vulnerability findings, like outdated libraries, cross-site
scripting, or use of mixed content"
C is correct
upvoted 1 times
https://cloud.google.com/appengine/docs/standard/python/application-security
upvoted 2 times
You are working on a social media application. You plan to add a feature that allows users to upload images. These images will be 2 MB to 1 GB in size. You want to minimize the infrastructure operations overhead for this feature.
A. Change the application to accept images directly and store them in the database that stores other user information.
B. Change the application to create signed URLs for Cloud Storage. Transfer these signed URLs to the client application to upload images to
Cloud Storage.
C. Set up a web server on GCP to accept user images and create a file store to keep uploaded files. Change the application to retrieve images
D. Create a separate bucket for each user in Cloud Storage. Assign a separate service account to allow write access on each bucket. Transfer
service account credentials to the client application based on user information. The application uses this service account to upload images to
Cloud Storage.
Correct Answer: B
Reference:
https://cloud.google.com/blog/products/storage-data-transfer/uploading-images-directly-to-cloud-storage-by-using-signed-url
Selected Answer: B
B is correct
upvoted 1 times
Selected Answer: B
B is correct
upvoted 3 times
Selected Answer: B
Vote B
upvoted 2 times
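A sketch of option B's server-side step using gsutil (bucket, object, and key file assumed):
# Generate a PUT-only URL valid for one hour; the client uploads with a plain HTTP PUT.
gsutil signurl -m PUT -d 1h /path/to/sa-key.json gs://my-images-bucket/uploads/image123.jpg
No GCP credentials ever reach the client; only the short-lived URL does.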
Your application is built as a custom machine image. You have multiple unique deployments of the machine image. Each deployment is a separate
managed instance group with its own template. Each deployment requires a unique set of configuration values. You want to provide these unique
values to each deployment but use the same custom machine image in all deployments. You want to use out-of-the-box features of Compute
Engine.
C. Place the unique configuration values in the instance template startup script.
D. Place the unique configuration values in the instance template instance metadata.
Correct Answer: A
Reference:
https://cloud.google.com/compute/docs/instance-groups
https://cloud.google.com/compute/docs/instances/startup-scripts:
"A startup script is a file that contains commands that run when a virtual machine (VM) instance boots."
Answer is D
upvoted 7 times
Selected Answer: D
D is correct.
upvoted 1 times
D is correct
upvoted 3 times
Selected Answer: D
Correct answer is D
upvoted 2 times
It should be either C or D.
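A sketch of option D with hypothetical metadata keys:
# Bake each deployment's unique values into its template as custom metadata...
gcloud compute instance-templates create deployment-a-template \
  --image-family=my-custom-image --image-project=my-project \
  --metadata=env=deployment-a,feature-flag=on
# ...and read them from inside any instance via the metadata server:
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/env"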
Your application performs well when tested locally, but it runs significantly slower after you deploy it to a Compute Engine instance. You need to diagnose the problem.
A. File a ticket with Cloud Support indicating that the application performs faster locally.
C. Use Cloud Profiler to determine which functions within the application take the longest amount of time.
D. Add logging commands to the application and use Cloud Logging to check where the latency problem occurs.
Correct Answer: D
Selected Answer: C
C is correct
upvoted 1 times
Selected Answer: C
A is incorrect because the argument “it worked on my machine” but doesn’t work on Google Cloud is never valid.
B is incorrect because Debugger snapshots only lets us review the application at a single point in time.
C is correct because it provides latency per function and historical latency information.
D is incorrect because while it works it requires a lot of work and is not the clear, optimal choice.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 3 times
Selected Answer: C
Correct is C
upvoted 1 times
Correct answer is C
upvoted 3 times
You have an application running in App Engine. Your application is instrumented with Stackdriver Trace. The /product-details request reports
details about four known unique products at /sku-details as shown below. You want to reduce the time it takes for the request to complete.
D. Store the /sku-details information in a database, and replace the webservice call with a database query.
Correct Answer: C
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
C feels right
upvoted 1 times
Selected Answer: C
C is correct
upvoted 1 times
Selected Answer: C
Vote C
upvoted 1 times
https://cloud.google.com/appengine/docs/standard/java/datastore/queries:
"This solution is no longer recommended"
upvoted 1 times
https://cloud.google.com/appengine/docs/standard
Back to C, as options A & B are wrong: they have no direct correlation with the issue, and there is nothing to suggest they need to be increased.
upvoted 3 times
Your company has a data warehouse that keeps your application information in BigQuery. The BigQuery data warehouse keeps 2 PBs of user data.
Recently, your company expanded your user base to include EU users and needs to comply with these requirements:
* Your company must be able to delete all user account information upon user request.
* All EU user data must be stored in a single region specifically for EU users.
Which two actions should you take? (Choose two.)
B. Create a dataset in the EU region that will keep information about EU users only.
C. Create a Cloud Storage bucket in the EU region to store information for EU users only.
D. Re-upload your data using a Cloud Dataflow pipeline by filtering your user records out.
E. Use DML statements in BigQuery to update/delete user records based on their requests.
Correct Answer: CE
Reference:
https://cloud.google.com/solutions/bigquery-data-warehouse
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
The link above supports E, since the question requires the ability to "delete all user account information upon user request".
https://cloud.google.com/architecture/bigquery-data-warehouse:
"A dataset is bound to a location. The dataset locations are as follows:
Multi-regional: A large geographic area, such as the United States, that contains two or more geographic places."
Selected Answer: BE
upvoted 1 times
Selected Answer: BE
Selected Answer: BD
BD are correct
upvoted 1 times
Selected Answer: BE
Vote BE
upvoted 1 times
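A sketch of the B+E combination, with project, dataset, table, and column names assumed:
# B: a dataset pinned to the EU location for EU user data.
bq --location=EU mk --dataset my-project:eu_users
# E: a DML statement to honor a deletion request.
bq query --use_legacy_sql=false --parameter='uid:STRING:user-123' \
  'DELETE FROM `my-project.eu_users.accounts` WHERE user_id = @uid'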
service: production
instance_class: B1
Correct Answer: C
Selected Answer: D
D is correct
upvoted 4 times
https://cloud.google.com/appengine/docs/legacy/standard/python/how-instances-are-managed#scaling_types
upvoted 2 times
https://cloud.google.com/appengine/docs/standard/reference/app-yaml?tab=python#basic_scaling
upvoted 1 times
Selected Answer: D
Option D - Max of 5
upvoted 2 times
Selected Answer: D
Answer is D
upvoted 3 times
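The question's app.yaml is only partially preserved above. A hypothetical reconstruction consistent with the B1 instance class (B-class instances require basic or manual scaling) and the "max of 5" reading in the comments would be:
service: production
instance_class: B1
basic_scaling:
  max_instances: 5
  idle_timeout: 10m
With basic scaling, instances are created only as requests arrive, up to max_instances.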
Your analytics system executes queries against a BigQuery dataset. The SQL query is executed in batch and passes the contents of a SQL file to
the BigQuery
CLI. Then it redirects the BigQuery CLI output to another process. However, you are getting a permission error from the BigQuery CLI when the query is executed.
A. Grant the service account BigQuery Data Viewer and BigQuery Job User roles.
B. Grant the service account BigQuery Data Editor and BigQuery Data Viewer roles.
C. Create a view in BigQuery from the SQL query and SELECT * from the view in the CLI.
D. Create a new dataset in BigQuery, and copy the source table to the new dataset. Query the new dataset and table from the CLI.
Correct Answer: B
Selected Answer: A
A is correct.
upvoted 1 times
A is correct
upvoted 3 times
Selected Answer: A
According to the best practice that a user should have least privilege (only those permissions required to perform an operation), option A is correct.
upvoted 1 times
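A sketch of option A with a hypothetical service account:
gcloud projects add-iam-policy-binding my-project \
  --member='serviceAccount:analytics-batch@my-project.iam.gserviceaccount.com' \
  --role='roles/bigquery.dataViewer'
gcloud projects add-iam-policy-binding my-project \
  --member='serviceAccount:analytics-batch@my-project.iam.gserviceaccount.com' \
  --role='roles/bigquery.jobUser'
Data Viewer lets the account read the tables; Job User lets it run the query jobs the CLI submits.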
Your application is running on Compute Engine and is showing sustained failures for a small number of requests. You have narrowed the cause down to a single Compute Engine instance.
Correct Answer: A
We recommend that you review the logs from the serial console for connection errors. You can access the serial console from your local
workstation by using a browser.
Enable read/write access to an instance's serial console, so you can log into the console and troubleshoot problems with the instance. This
approach is useful when you cannot log in with SSH, or if the instance has no connection to the network. The serial console remains accessible
in both of these situations.
upvoted 4 times
Selected Answer: B
B is correct.
upvoted 1 times
B is correct
upvoted 2 times
B is correct, because the first step is to check via the serial console whether the instance is responsive or not.
upvoted 2 times
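A sketch of the serial-console approach, with instance name and zone assumed:
# Enable interactive serial console access on the instance...
gcloud compute instances add-metadata my-instance --zone=us-central1-a \
  --metadata=serial-port-enable=TRUE
# ...and dump what the instance has written to its serial port so far.
gcloud compute instances get-serial-port-output my-instance --zone=us-central1-a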
You configured your Compute Engine instance group to scale automatically according to overall CPU usage. However, your application's response latency increases sharply before the cluster has finished adding instances. You want to provide a more consistent latency experience for your users.
C. Increase the target CPU usage for the instance group autoscaler.
D. Decrease the target CPU usage for the instance group autoscaler.
Correct Answer: AC
Selected Answer: BD
Option B: Decrease the cool-down period for instances added to the group. The cool-down period is the time the autoscaler waits for a newly added instance to initialize before trusting its metrics. Decreasing the cool-down period lets the instance group scale out more quickly in response to changes in CPU usage, which may help to reduce latency.
Option D: Decrease the target CPU usage for the instance group autoscaler. The target CPU usage is the average CPU usage that the instance group autoscaler aims to maintain for the group. Decreasing the target CPU usage leaves more headroom and makes the group scale out earlier as CPU usage rises, which may also help to reduce latency.
upvoted 4 times
BD are correct
upvoted 3 times
Selected Answer: BD
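A sketch of the B and D settings together (group name, zone, and values assumed):
gcloud compute instance-groups managed set-autoscaling my-mig --zone=us-central1-a \
  --max-num-replicas=20 \
  --target-cpu-utilization=0.6 \
  --cool-down-period=60s
A lower target utilization makes scale-out start earlier, and a shorter cool-down period lets the autoscaler trust new instances' metrics sooner.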
You have an application controlled by a managed instance group. When you deploy a new version of the application, costs should be minimized
and the number of instances should not increase. You want to ensure that, when each new instance is created, the deployment only continues if the new instance is healthy.
Correct Answer: A
Reference:
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
B(maxSurge = 0, maxUnavailable = 1)
upvoted 22 times
B would be correct
upvoted 5 times
Selected Answer: B
In a rolling update, maxSurge controls how many instances can be added above the target size of the MIG (managed instance group), while maxUnavailable controls the maximum number of instances that can be taken offline at the same time during the update.
upvoted 1 times
Selected Answer: A
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#max_surge
upvoted 1 times
Selected Answer: A
Option A is best suited here as if we go with option B it does not ensures that deployment only continue if the new instance is healthy.
upvoted 1 times
Selected Answer: A
Performing a rolling update with maxSurge set to 1 and maxUnavailable set to 0 ensures that the deployment only continues if the new instance is
healthy. The maxSurge parameter ensures that only one new instance is created at a time, while the maxUnavailable parameter ensures that the
number of healthy instances does not decrease during the deployment. This will minimize costs by not creating unnecessary instances and will also
ensure that the deployment is safe and does not impact the application's availability.
Option B is incorrect because setting maxUnavailable to 1 would mean that one instance will be taken offline at a time, which could impact the
application's availability during the deployment.
Options C and D are incorrect because maxHealthy and maxUnhealthy are not valid parameters for a rolling update.
upvoted 1 times
Selected Answer: A
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#max_surge
upvoted 2 times
Selected Answer: B
B is correct
upvoted 2 times
Selected Answer: B
maxSurge specifies the maximum number (or percentage) of pods above the specified number of replicas (is the maximum number of new pods
that will be created at a time) and maxUnavailable is the maximum number of old pods that will be deleted at a time.
If you do not want any unavailable machines during an update, set the maxUnavailable value to 0 and the maxSurge value to greater than 0. With
these settings, Compute Engine removes each old machine only after its replacement new machine is created and running.
upvoted 2 times
Selected Answer: B
Yes, it should be B, as the question states the deployment should stop if the new instance is unhealthy… the only way that can happen is to set maxSurge to 0 with maxUnavailable at 1.
upvoted 1 times
Note: If you set both maxSurge and maxUnavailable properties and both
properties resolve to 0, the Updater automatically sets maxUnavailable=1, to
ensure that the automated update can always proceed.
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#max_surge
Setting a higher maxSurge value speeds up your update, at the cost of additional instances, which are billed according to the Compute Engine
price sheet.
upvoted 2 times
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#max_unavailable
If you do not want any unavailable machines during an update, set the maxUnavailable value to 0 and the maxSurge value to greater than 0. With
these settings, Compute Engine removes each old machine only after its replacement new machine is created and running.
upvoted 1 times
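A sketch of a rolling update with option B's settings (group, zone, and template names assumed):
# maxSurge=0 keeps the instance count, and therefore cost, flat; maxUnavailable=1
# replaces one instance at a time, proceeding only while replacements come up healthy.
gcloud compute instance-groups managed rolling-action start-update my-mig \
  --zone=us-central1-a \
  --version=template=my-template-v2 \
  --max-surge=0 --max-unavailable=1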
Your application requires service accounts to be authenticated to GCP products via credentials stored on its host Compute Engine virtual machine
instances. You want to distribute these credentials to the host instances as securely as possible.
A. Use HTTP signed URLs to securely provide access to the required resources.
B. Use the instance's service account Application Default Credentials to authenticate to the required resources.
C. Generate a P12 file from the GCP Console after the instance is deployed, and copy the credentials to the host instance before starting the
application.
D. Commit the credential JSON file into your application's source repository, and have your CI/CD process package it with the software that is deployed.
Correct Answer: B
Reference:
https://cloud.google.com/compute/docs/api/how-tos/authorization
Selected Answer: B
Option B is Correct: This approach ensures that the credentials are securely managed and automatically provided to the instances when needed.
upvoted 1 times
Selected Answer: A
This approach ensures that the credentials are securely managed and automatically provided to the instances when needed.
upvoted 1 times
Selected Answer: B
Answer B, because best practice is to not store service account key files when possible. With Compute Engine, the VM's service account can be used to call Google APIs if the needed roles are granted to that service account.
upvoted 1 times
Using the instance's service account Application Default Credentials is the most secure method for distributing credentials to the host instances.
This method allows the instance to automatically authenticate with the required resources using the instance's built-in service account, without
requiring the credentials to be stored on the instance or transmitted over the network. This eliminates the risk of the credentials being
compromised or exposed. Additionally, this method is the most convenient, as it requires no manual steps to set up the credentials on the instance.
upvoted 3 times
Selected Answer: B
I think B is correct
upvoted 2 times
Selected Answer: B
I'm also considering this part -- "distribute these credentials to the host instances as securely as possible"
Selected Answer: C
Your application requires service accounts to be authenticated to GCP products via credentials stored on its host Compute Engine virtual machine
instances.
The application requires the credentials to be stored on the VM instance, so I think the application code points to a file stored in the Instance.
upvoted 1 times
Answer is B
https://cloud.google.com/docs/authentication/production#automatically
If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set, ADC uses the service account that is attached to the resource that is
running your code.
upvoted 4 times
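A sketch of option B at instance-creation time (all names assumed). With a service account attached, client libraries pick up Application Default Credentials from the metadata server, and no key file is ever copied to the VM:
gcloud compute instances create my-app-vm --zone=us-central1-a \
  --service-account=app-sa@my-project.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform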
https://cloud.google.com/iam/docs/creating-managing-service-account-keys:
"To use a service account from outside of Google Cloud, such as on other platforms or on-premises, you must first establish the identity of the
service account"
"You can create service account keys in JSON or PKCS#12 (P12) format. "
C is the answer
upvoted 2 times
Answer is B not C
upvoted 4 times
Your application is deployed in a Google Kubernetes Engine (GKE) cluster. You want to expose this application publicly behind a Cloud Load Balancing HTTP(S) load balancer.
Correct Answer: A
Reference:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
A(https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer)
upvoted 7 times
A)
The important part of the question is this "...expose this application publicly behind a Cloud Load Balancing HTTP(S) load balancer." This means it is
an L7 exposure using HTTPS (a Service of type "LoadBalancer" would only create an L4 exposure - IP only... No HTTPS).
So Ingress is the choice you should make. And in GKE, luckily this is one thing - create an ingress and the LB will be attached automagically ;D
upvoted 5 times
Selected Answer: A
To expose your application publicly behind a Cloud Load Balancing HTTP(S) load balancer in a Google Kubernetes Engine (GKE) cluster, we should
configure a GKE Ingress resource. This approach allows you to define rules for routing external HTTP(S) traffic to internal services based on
hostnames and URL paths.
upvoted 1 times
Configuring a GKE Ingress resource is not enough; you also need to expose the service with the type NodePort and then configure the Ingress resource to point to that service.
To configure a GKE Ingress resource, you need to define rules for routing HTTP(S) traffic to the application in the cluster. This is done by creating an
Ingress object, which is associated with one or more Service objects, each of which is associated with a set of Pods. The GKE Ingress controller will
then create a Google Cloud HTTP(S) Load Balancer and configure it according to the information in the Ingress and its associated Services.
Alternatively, you can configure a GKE Service resource with type: LoadBalancer to expose your application publicly. This will create a Cloud Load
Balancing HTTP(S) load balancer and associate it with the Service. The Service will then route traffic to the application Pods.
upvoted 2 times
Selected Answer: A
A is correct
upvoted 2 times
Selected Answer: A
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
GKE Ingress for HTTP(S) Load Balancing
This page provides a general overview of what Ingress for HTTP(S) Load Balancing is and how it works. Google Kubernetes Engine (GKE) provides a
built-in and managed Ingress controller called GKE Ingress. This controller implements Ingress resources as Google Cloud load balancers for
HTTP(S) workloads in GKE.
upvoted 1 times
D is correct
upvoted 2 times
"When you create an Ingress object, the GKE Ingress controller creates a Google Cloud HTTP(S) Load Balancer and configures it according to the
information in the Ingress and its associated Services."
upvoted 5 times
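A minimal sketch of the Service-plus-Ingress pairing described above (all names and ports assumed):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  defaultBackend:
    service:
      name: my-app
      port:
        number: 80
EOF
On GKE, the built-in Ingress controller reacts to this resource by provisioning an external HTTP(S) load balancer.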
Your company is planning to migrate their on-premises Hadoop environment to the cloud. Increasing storage cost and maintenance of data stored
in HDFS is a major concern for your company. You also want to make minimal changes to existing data analytics jobs and existing architecture.
A. Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their information from BigQuery instead of the on-premises
Hadoop environment.
B. Create Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of your existing environment into
C. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the new Cloud Dataproc cluster.
Move your HDFS data into larger HDD disks to save on storage costs.
D. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to the new cluster. Move your data
to Cloud Storage and leverage the Cloud Dataproc connector to run jobs on that data.
Correct Answer: D
I'd choose D.
upvoted 7 times
https://cloud.google.com/architecture/hadoop/hadoop-gcp-migration-overview:
"Keeping your data in a persistent HDFS cluster using Dataproc is more expensive than storing your data in Cloud Storage, which is what we
recommend, as explained later. Keeping data in an HDFS cluster also limits your ability to use your data with other Google Cloud products."
"Google Cloud includes Dataproc, which is a managed Hadoop and Spark environment. You can use Dataproc to run most of your existing jobs
with minimal alteration, so you don't need to move away from all of the Hadoop tools you already know"
D is the answer
upvoted 6 times
Selected Answer: D
Option D is correct.
upvoted 1 times
B is not a feasible option because Compute Engine instances do not have the capability to run HDFS.
C would not allow you to save on storage costs as it involves moving your data to larger HDD disks rather than a more cost-effective storage
solution like Cloud Storage.
upvoted 1 times
Selected Answer: D
D is correct
upvoted 2 times
upvoted 1 times
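A sketch of option D's job submission once the data lives in Cloud Storage (cluster, region, jar, and paths assumed):
gcloud dataproc jobs submit spark --cluster=my-cluster --region=us-central1 \
  --class=com.example.WordCount \
  --jars=gs://my-bucket/jobs/wordcount.jar \
  -- gs://my-bucket/input/ gs://my-bucket/output/
The gs:// paths replace hdfs:// paths; the Cloud Storage connector is preinstalled on Dataproc, so the job code otherwise stays the same.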
Your data is stored in Cloud Storage buckets. Fellow developers have reported that data downloaded from Cloud Storage is resulting in slow API
performance.
You want to research the issue to provide details to the GCP support team.
Correct Answer: B
Reference:
https://groups.google.com/forum/#!topic/gce-discussion/xBl9Jq5HDsY
B(https://cloud.google.com/storage/docs/gsutil/commands/perfdiag#providing-diagnostic-output-to-cloud-storage-team)
upvoted 13 times
Selected Answer: B
To research the issue of slow API performance when downloading data from Cloud Storage, you can use the gsutil perfdiag command. This
command runs a set of tests to report the actual performance of a Cloud Storage bucket and provides detailed information on the performance of
individual operations.
upvoted 1 times
The gsutil perfdiag command is used to diagnose performance issues with Cloud Storage. It can be used to perform various tests such as
download, upload, and metadata operations. By using the -o flag, you can specify an output file where the results of the tests will be stored in
JSON format. This output file can then be provided to the GCP support team to help them investigate the issue.
upvoted 2 times
B is correct
upvoted 2 times
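A sketch of the command, with bucket and output file assumed:
# Run the diagnostic suite and write a machine-readable report for the support ticket.
gsutil perfdiag -o perfdiag-output.json gs://my-bucket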
You are using Cloud Build to promote a Docker image to Development, Test, and Production environments. You need to ensure that the same Docker image is deployed to each environment.
C, since digests are immutable, whilst docker tags are mutable (hence not D).
https://cloud.google.com/architecture/using-container-images
upvoted 19 times
Selected Answer: C
I would go with C.
upvoted 1 times
Selected Answer: C
Using the digest of the Docker image is the most reliable way to ensure that the exact same Docker image is deployed to each environment. The
digest is a hash of the image content and metadata, which is unique to each image. This means that even if the image is tagged with different
versions or names, the digest will remain the same as long as the content and metadata are identical.
On the other hand, using the latest Docker image tag or a semantic version tag may not guarantee that the exact same image is deployed to each
environment. These tags are mutable and can be overwritten or updated, which could result in different images being deployed to different
environments.
Using a unique Docker image name could work, but it may be more difficult to manage and track multiple images with different names, especially
if there are many environments or frequent updates.
upvoted 2 times
Answer C, because you need to be sure the same image is used for the three environments. A tag's version can change between deployments to the environments.
upvoted 1 times
The digest of the Docker image is a unique identifier for the specific version of the image. By using the digest, you can ensure that the same exact
version of the image is deployed to each environment. Using the latest tag or a unique image name may not necessarily guarantee that the same
version is deployed, as these tags may change over time. Using a semantic version tag would only ensure that the same version is deployed if you
follow a strict versioning policy and only update the image by incrementing the patch or minor version number.
upvoted 3 times
Selected Answer: C
C is the answer
upvoted 1 times
Selected Answer: D
D is correct
upvoted 2 times
Answer is D
upvoted 3 times
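A sketch of deploying by digest (project, image, and service names assumed; Cloud Run stands in here for whatever runtime each environment uses):
# Resolve the digest once at build time...
DIGEST=$(gcloud container images describe gcr.io/my-project/my-app:latest \
  --format='value(image_summary.digest)')
# ...then promote the identical bytes through every environment.
gcloud run deploy my-app-dev --image=gcr.io/my-project/my-app@${DIGEST} --region=us-central1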
Your company has created an application that uploads a report to a Cloud Storage bucket. When the report is uploaded to the bucket, you want to
publish a message to a Cloud Pub/Sub topic. You want to implement a solution that will take a small amount of effort to implement.
A. Configure the Cloud Storage bucket to trigger Cloud Pub/Sub notifications when objects are modified.
B. Create an App Engine application to receive the file; when it is received, publish a message to the Cloud Pub/Sub topic.
C. Create a Cloud Function that is triggered by the Cloud Storage bucket. In the Cloud Function, publish a message to the Cloud Pub/Sub
topic.
D. Create an application deployed in a Google Kubernetes Engine cluster to receive the file; when it is received, publish a message to the Cloud
Pub/Sub topic.
Correct Answer: C
Reference:
https://cloud.google.com/storage/docs/pubsub-notifications
Since one of the requirements is "You want to implement a solution that will take a small amount of effort to implement", I'd choose A because no code has to be written. However, option C works great as well and is recommended by https://cloud.google.com/storage/docs/pubsub-notifications#other_notification_options.
upvoted 16 times
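For option A, a minimal sketch of wiring up the notification with the google-cloud-storage Python client (bucket and topic names are hypothetical; the Cloud Storage service agent must be allowed to publish to the topic):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("report-uploads")  # hypothetical bucket name

# Attach a Pub/Sub notification configuration to the bucket so that
# Cloud Storage itself publishes a message on each new object; no
# application code runs in the upload path.
notification = bucket.notification(
    topic_name="report-uploaded",          # hypothetical topic name
    event_types=["OBJECT_FINALIZE"],       # fire when an upload completes
    payload_format="JSON_API_V1",
)
notification.create()
```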
Selected Answer: A
A is correct.
upvoted 1 times
I think A is correct
upvoted 3 times
Your teammate has asked you to review the code below, which is adding a credit to an account balance in Cloud Datastore.
Correct Answer: A
Selected Answer: B
It is a good practice to perform a get and put within a transaction to ensure that the update to the balance is atomic. This prevents other processes
from reading the balance and making updates to it simultaneously, which could lead to incorrect or inconsistent results. By using a transaction,
your teammate can ensure that the balance is updated correctly and consistently.
upvoted 3 times
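A minimal sketch of the transactional read-modify-write pattern described above, using the google-cloud-datastore client (the kind and property names are hypothetical):

```python
from google.cloud import datastore

client = datastore.Client()

def credit_account(account_id: int, amount: int) -> None:
    """Atomically add a credit to an account balance."""
    key = client.key("Account", account_id)  # hypothetical kind/id
    with client.transaction():
        account = client.get(key)
        account["balance"] += amount
        client.put(account)  # commits atomically when the block exits
```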
Selected Answer: B
B is correct
upvoted 3 times
Selected Answer: B
B is the answer
upvoted 4 times
Your company stores their source code in a Cloud Source Repositories repository. Your company wants to build and test their code on each source
code commit to the repository and requires a solution that is managed and has minimal operations overhead. What should you do?
A. Use Cloud Build with a trigger configured for each source code commit.
B. Use Jenkins deployed via the Google Cloud Platform Marketplace, configured to watch for source code commits.
C. Use a Compute Engine virtual machine instance with an open source continuous integration tool, configured to watch for source code
commits.
D. Use a source code commit trigger to push a message to a Cloud Pub/Sub topic that triggers an App Engine service to build the source
code.
Correct Answer: A
https://cloud.google.com/build/docs/automating-builds/create-manage-triggers
A is the answer
upvoted 6 times
A. Use Cloud Build with a trigger configured for each source code commit.
Cloud Build is a fully managed service for building, testing, and deploying software quickly. It integrates with Cloud Source Repositories and can be
triggered by source code commits, which makes it an ideal solution for building and testing code on each commit. It requires minimal operations
overhead as it is fully managed by Google Cloud.
upvoted 1 times
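As an illustration of option A, a trigger can also be created programmatically; a rough sketch with the Cloud Build Python client (project, repository, and file names are assumptions, and the flattened call signature should be checked against the client version you use):

```python
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()

trigger = cloudbuild_v1.BuildTrigger(
    name="build-on-commit",
    # Fire on every commit to any branch of the CSR repository.
    trigger_template=cloudbuild_v1.RepoSource(
        project_id="my-project",   # hypothetical project
        repo_name="my-repo",       # hypothetical repository
        branch_name=".*",          # regex matching all branches
    ),
    filename="cloudbuild.yaml",    # build steps live in the repo itself
)

client.create_build_trigger(project_id="my-project", trigger=trigger)
```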
Selected Answer: A
A is correct
upvoted 3 times
Selected Answer: A
Cloud Build
upvoted 2 times
You are writing a Compute Engine hosted application in project A that needs to securely authenticate to a Cloud Pub/Sub topic in project B. What should you do?
A. Configure the instances with a service account owned by project B. Add the service account as a Cloud Pub/Sub publisher to project A.
B. Configure the instances with a service account owned by project A. Add the service account as a publisher on the topic.
C. Configure Application Default Credentials to use the private key of a service account owned by project B. Add the service account as a
D. Configure Application Default Credentials to use the private key of a service account owned by project A. Add the service account as a
Correct Answer: B
I vote for B.
upvoted 13 times
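A sketch of what option B looks like from the application side: the instance runs as a project-A service account, and the client can publish across projects once that account holds roles/pubsub.publisher on the topic (project and topic names are hypothetical):

```python
from google.cloud import pubsub_v1

# On a Compute Engine instance created with a project-A service account,
# the client picks up those credentials automatically via Application
# Default Credentials; no key file is needed.
publisher = pubsub_v1.PublisherClient()

# The topic lives in project B; this succeeds once project A's service
# account has been granted roles/pubsub.publisher on the topic.
topic_path = publisher.topic_path("project-b", "audit-topic")
future = publisher.publish(topic_path, b"hello from project A")
print(future.result())  # message ID once the publish is acknowledged
```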
Selected Answer: B
I would go with B.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 2 times
B is the answer
upvoted 2 times
You are developing a corporate tool on Compute Engine for the finance department, which needs to authenticate users and verify that they are in the finance department. What should you do?
A. Enable Cloud Identity-Aware Proxy on the HTTP(s) load balancer and restrict access to a Google Group containing users in the finance
department. Verify the provided JSON Web Token within the application.
B. Enable Cloud Identity-Aware Proxy on the HTTP(s) load balancer and restrict access to a Google Group containing users in the finance
department. Issue client-side certificates to everybody in the finance team and verify the certificates in the application.
C. Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Verify the provided JSON Web Token within
the application.
D. Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Issue client side certificates to everybody in
Correct Answer: C
https://cloud.google.com/armor/docs/security-policy-overview:
"Google Cloud Armor security policies protect your application by providing Layer 7 filtering and by scrubbing incoming requests for common web
attacks or other Layer 7 attributes to potentially block traffic before it reaches your load balanced backend services or backend buckets"
https://cloud.google.com/endpoints/docs/openapi/authenticating-users-google-id:
"To authenticate a user, a client application must send a JSON Web Token (JWT) in the authorization header of the HTTP request to your backend
API"
A is correct
upvoted 5 times
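For the "verify the provided JSON Web Token within the application" part of option A, Google documents verifying the JWT that IAP passes in the x-goog-iap-jwt-assertion header; a sketch along those lines (the audience string is a placeholder you would look up for your own backend service):

```python
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

# Placeholder audience; for a backend service it has the form
# /projects/PROJECT_NUMBER/global/backendServices/SERVICE_ID.
IAP_AUDIENCE = "/projects/PROJECT_NUMBER/global/backendServices/SERVICE_ID"

def verify_iap_jwt(iap_jwt: str) -> dict:
    """Verify the JWT that IAP adds in the x-goog-iap-jwt-assertion header."""
    return id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=IAP_AUDIENCE,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
```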
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
A is correct
upvoted 2 times
Community choice is A
upvoted 1 times
Your API backend is running on multiple cloud providers. You want to generate reports for the network latency of your API. Which two steps should you take? (Choose two.)
Correct Answer: CE
Selected Answer: AC
solution is AC:
https://cloud.google.com/trace/docs/zipkin
upvoted 1 times
Selected Answer: AC
The two steps you should take to generate reports for the network latency of your API running on multiple cloud providers are:
A. Use Zipkin collector to gather data: Zipkin is a distributed tracing system that helps you gather data about the latency of requests made to your
API. It allows you to trace requests as they flow through your system, and provides insight into the performance of your services. You can use
Zipkin collectors to collect data from multiple cloud providers, and then generate reports to analyze the latency of your API.
C. Use Stackdriver Trace to generate reports: Stackdriver Trace is a distributed tracing system that helps you trace requests across multiple services
and provides detailed performance data about your applications. It allows you to visualize and analyze the performance of your API and its
dependencies. You can use Stackdriver Trace to generate reports about the network latency of your API running on multiple cloud providers.
Using Zipkin collector will allow you to gather data from your instrumented application running on multiple cloud providers. Stackdriver Trace can
then be used to generate reports based on this data.
Option B, using Fluentd agent, is not related to generating reports on network latency for an API.
Option D, using Stackdriver Debugger, is not related to generating reports on network latency for an API.
Option E, using Stackdriver Profiler, is not related to generating reports on network latency for an API.
upvoted 1 times
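The answer above relies on Zipkin clients and collectors feeding Cloud Trace. As an illustrative sketch of the export side using OpenTelemetry (a newer alternative to the Zipkin client model, not what the question names), assuming the opentelemetry-sdk and opentelemetry-exporter-gcp-trace packages:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

# Ship spans from a service on any cloud provider to Cloud Trace,
# where latency reports can then be generated.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("handle-api-request"):
    pass  # handle the request; its latency is recorded as a span
```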
Selected Answer: AC
A/C
https://cloud.google.com/trace/docs/zipkin
upvoted 1 times
BD are correct
upvoted 1 times
Selected Answer: AC
https://cloud.google.com/trace/docs/zipkin#frequently_asked_questions
use a Zipkin server to receive traces from Zipkin clients and forward those traces to Cloud Trace for analysis.
upvoted 2 times
https://cloud.google.com/trace/docs/zipkin:
"receive traces from Zipkin clients and forward those traces to Cloud Trace for analysis."
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10,000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data.
Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* Logging must be increased, and data should be stored in a cloud analytics platform.
A. BigQuery
B. Cloud SQL
C. Cloud Spanner
D. Cloud Datastore
Correct Answer: C
In the case study is stated: "Obtain user activity metrics to better understand how to monetize their product", which means that they'll need to
analyse the user activity, so... I'll go with answer A (BigQuery)
upvoted 13 times
Selected Answer: D
Selected Answer: A
If it was manage user states, I would consider D (Datastore/Firestore) but this is not the case.
So, from my point of view, A is the correct answer.
upvoted 1 times
A is correct
upvoted 1 times
BigQuery for user activity analysis. User activity is also a kind of raw data that can be used to segment users by age, preferences, etc., so BigQuery fits these use cases best.
upvoted 2 times
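If BigQuery is chosen, user activity events can be streamed in directly; a minimal sketch with the google-cloud-bigquery client (dataset, table, and schema are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical dataset/table for user activity events.
table_id = "my-project.analytics.user_activity"

rows = [
    {"user_id": "u123", "event": "open_app", "region": "us-south1"},
    {"user_id": "u456", "event": "join_event", "region": "europe-west1"},
]

# Streaming insert; the returned list is empty on success.
errors = client.insert_rows_json(table_id, rows)
assert not errors, errors
```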
https://cloud.google.com/datastore/docs/concepts/overview#what_its_good_for
I feel like this is a toss-up between these two since we're talking about user profiles/data; I would vote for D here because MySQL offers a really rigid schema and isn't well suited to massive scaling either.
upvoted 3 times
Case study: HipLocal (identical to the case study above).
Correct Answer: C
It depends on which authentication we are talking about. If it is authentication to an internal app, the answer is C (with a specific tag). If it is authentication to 'the' app that HipLocal offers to general users, the answer is D (with a tag; all users outside that tag will be rejected). It is not clear to me which tag we are talking about here.
upvoted 6 times
Selected Answer: C
C is correct
upvoted 1 times
Selected Answer: C
C is correct
upvoted 1 times
Case study: HipLocal (identical to the case study above).
A. Use the Cloud Data Loss Prevention API for redaction of the review dataset.
B. Use the Cloud Data Loss Prevention API for de-identification of the review dataset.
C. Use the Cloud Natural Language Processing API for redaction of the review dataset.
D. Use the Cloud Natural Language Processing API for de-identification of the review dataset.
Correct Answer: D
Answer is B. The Data Loss Prevention API is used for de-identification, not the Natural Language API.
upvoted 18 times
Selected Answer: D
https://www.exam-answer.com/hiplocal-data-preparation
"De-identification is the process of removing or obfuscating personally identifiable information (PII) from a dataset, so that individuals cannot be
identified. In this case, the data science team needs to analyze user reviews, which could potentially contain PII such as names, email addresses, or
other personal information. To protect the privacy of the users, the data should be de-identified before it is analyzed.
The Cloud Natural Language Processing API provides various features such as entity recognition, sentiment analysis, and syntax analysis. The API
also includes a feature for de-identification, which can be used to remove PII from text data. This feature uses machine learning models to identify
and mask or replace PII in the text.
In contrast, the Cloud Data Loss Prevention API is designed to identify and redact sensitive data, such as credit card numbers, social security
numbers, or other types of PII. It is not intended for general de-identification of text data."
upvoted 1 times
D is correct.
upvoted 1 times
Selected Answer: D
The Cloud Natural Language Processing API can help to extract insights from the user reviews, such as sentiment analysis and entity recognition.
Additionally, de-identification can help to protect user privacy by removing any personal information from the review data.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 1 times
Redaction protects users' data even more, whereas de-identification might be better for analyzing the data and linking records together, right?
upvoted 1 times
Data Loss Prevention (DLP) is not meant for analytics, so A and B are wrong, while de-identification is for DLP.
upvoted 2 times
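For reference, de-identification with the DLP API looks roughly like this (project ID, infoTypes, and sample text are illustrative):

```python
import google.cloud.dlp_v2

dlp = google.cloud.dlp_v2.DlpServiceClient()
parent = "projects/my-project"  # hypothetical project

response = dlp.deidentify_content(
    request={
        "parent": parent,
        # Detect a few common PII infoTypes in the review text...
        "inspect_config": {
            "info_types": [{"name": "PERSON_NAME"}, {"name": "EMAIL_ADDRESS"}]
        },
        # ...and replace each match with its infoType name.
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    {"primitive_transformation": {"replace_with_info_type_config": {}}}
                ]
            }
        },
        "item": {"value": "Great app! - John Doe, [email protected]"},
    }
)
print(response.item.value)  # e.g. "Great app! - [PERSON_NAME], [EMAIL_ADDRESS]"
```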
Case study: HipLocal (identical to the case study above).
In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to?
A. Cloud Spanner
B. Cloud Datastore
Correct Answer: A
https://cloud.google.com/blog/products/databases/spanner-relational-database-for-all-size-applications-faqs
https://cloud.google.com/architecture/best-practices-cloud-spanner-gaming-database#select_a_data_locality_to_meet_compliance_requirements
https://cloud.google.com/blog/products/gcp/introducing-cloud-spanner-a-global-database-service-for-mission-critical-applications
A. Cloud Spanner
- global service
- supports durably store application data
- supports GDPR, to meet data locality
upvoted 8 times
Selected Answer: A
A is best suited.
upvoted 1 times
Selected Answer: A
I think A
upvoted 1 times
Cloud Spanner is a highly scalable, globally-distributed database service offered by Google Cloud, but it may not be the best fit for HipLocal's
needs. While Cloud Spanner provides automatic and instant scaling, strong consistency guarantees, and high availability, it also comes with a
higher operational overhead and cost compared to other Google Cloud databases. Additionally, Cloud Spanner is designed for large, mission-
critical applications that require strict consistency guarantees across multiple regions, which may not be necessary for HipLocal's current
requirements.
In this case, it would be more appropriate for HipLocal to separate Cloud SQL clusters for each region to store their application state, as this
solution would provide the necessary data storage capabilities and be more cost-effective for their current requirements.
upvoted 1 times
A is correct
upvoted 1 times
You have an application deployed in production. When a new version is deployed, you want to ensure that all production traffic is routed to the new
version of your application. You also want to keep the previous version deployed so that you can revert to it if there is an issue with the new
version. Which deployment strategy should you use?
A. Blue/green deployment
B. Canary deployment
C. Rolling deployment
D. Recreate deployment
Correct Answer: C
Selected Answer: A
Selected Answer: A
The difference between canary deployment and blue/green deployment is the presence or absence of a testing process.
https://www.sedesign.co.jp/dxinsight/what-is-canary-release
Since there is no testing process in the question, I vote for Blue/Green Deployment.
For more information on each deployment strategy, see:
https://garafu.blogspot.com/2018/11/release-strategy.html
It is in Japanese, so please translate and read it.
upvoted 2 times
Selected Answer: B
B is correct
upvoted 1 times
Selected Answer: A
No... question #9 is canary, because it wants to minimize the impact: you run two versions together for testing. This one wants all traffic to use the new version, so it can be A or C. A is recommended and clear-cut, but costly. C is cheaper, but it does not route all traffic to the new version immediately.
So I pick A, since the question did not mention cost.
upvoted 5 times
You are porting an existing Apache/MySQL/PHP application stack from a single machine to Google
Kubernetes Engine. You need to determine how to containerize the application. Your approach should follow Google-recommended best practices
for availability.
A. Package each component in a separate container. Implement readiness and liveness probes.
B. Package the application in a single container. Use a process management tool to manage each component.
C. Package each component in a separate container. Use a script to orchestrate the launch of the components.
D. Package the application in a single container. Use a bash script as an entrypoint to the container, and then spawn each component as a
background job.
Correct Answer: D
Reference:
https://cloud.google.com/architecture/best-practices-for-building-containers
A. Package each component in a separate container. Implement readiness and liveness probes.
This is the recommended approach for containerizing an application for use on Kubernetes. By packaging each component in a separate container,
you can ensure that each component is isolated and can be managed independently. You can then use readiness and liveness probes to monitor
the health and availability of each component, which will help ensure the overall availability of the application.
upvoted 6 times
This option is not recommended because it does not follow best practices for containerization. By packaging the entire application in a single
container, you would not be able to manage the individual components of the application independently, which could make it more difficult to
ensure their availability.
upvoted 1 times
This option is not recommended because it does not follow best practices for containerization. While packaging each component in a separate
container is a good approach, using a script to orchestrate the launch of the components is not an effective way to ensure their availability.
Instead, you should use readiness and liveness probes to monitor the health and availability of each component.
upvoted 1 times
This option is not recommended because it does not follow best practices for containerization. By packaging the entire application in a single
container, you would not be able to manage the individual components of the application independently, which could make it more difficult to
ensure their availability. Additionally, using a bash script to spawn each component as a background job is not an effective way to manage and
monitor the availability of the components.
upvoted 1 times
Selected Answer: A
Selected Answer: A
A is the answer.
https://cloud.google.com/architecture/best-practices-for-building-containers#package_a_single_app_per_container
When you start working with containers, it's a common mistake to treat them as virtual machines that can run many different things
simultaneously. A container can work this way, but doing so reduces most of the advantages of the container model. For example, take a classic
Apache/MySQL/PHP stack: you might be tempted to run all the components in a single container. However, the best practice is to use two or three
different containers: one for Apache, one for MySQL, and potentially one for PHP if you are running PHP-FPM.
upvoted 1 times
Because a container is designed to have the same lifecycle as the app it hosts, each of your containers should contain only one app. When a
container starts, so should the app, and when the app stops, so should the container. The following diagram shows this best practice.
https://cloud.google.com/architecture/best-practices-for-building-containers#package_a_single_app_per_container
Answer A
upvoted 1 times
Selected Answer: A
A is correct
upvoted 2 times
According to me "A" is the correct answer, because the best practice says "classic Apache/MySQL/PHP stack: you might be tempted to run all the
components in a single container. However, the best practice is to use two or three different containers: one for Apache, one for MySQL, and
potentially one for PHP if you are running PHP-FPM."
upvoted 3 times
You are developing an application that will be launched on Compute Engine instances into multiple distinct projects, each corresponding to the
environments in your software development process (development, QA, staging, and production). The instances in each project have the same
application code but a different configuration. During deployment, each instance should receive the application's configuration based on the
environment it serves. You want to minimize the number of steps to configure this flow. What should you do?
A. When creating your instances, configure a startup script using the gcloud command to determine the project name that indicates the
correct environment.
B. In each project, configure a metadata key "environment" whose value is the environment it serves. Use your deployment tool to query the instance metadata and configure the application based on the "environment" value.
C. Deploy your chosen deployment tool on an instance in each project. Use a deployment job to retrieve the appropriate configuration file from
your version control system, and apply the configuration when deploying the application on each instance.
D. During each instance launch, configure an instance custom-metadata key named "environment" whose value is the environment the instance serves. Use your deployment tool to query the instance metadata, and configure the application based on the "environment" value.
Correct Answer: D
Reference:
https://cloud.google.com/compute/docs/metadata/overview
Selected Answer: B
https://cloud.google.com/compute/docs/metadata/setting-custom-metadata#set-custom-project-wide-metadata
upvoted 1 times
Selected Answer: B
Selected Answer: B
B is the answer.
https://cloud.google.com/compute/docs/metadata/setting-custom-metadata#set-custom
upvoted 1 times
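A sketch of the deployment-tool side of option B: reading the project-wide "environment" key (the key name comes from the question) from the metadata server:

```python
import requests

# The project-wide key would be set once per project, e.g.:
#   gcloud compute project-info add-metadata --metadata environment=staging
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "project/attributes/environment"
)

def current_environment() -> str:
    """Query the metadata server from inside the instance."""
    resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
    resp.raise_for_status()
    return resp.text  # e.g. "development", "qa", "staging", "production"
```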
B is correct
upvoted 2 times
Vote B
upvoted 2 times
You are developing an ecommerce application that stores customer, order, and inventory data as relational tables inside Cloud Spanner. During a
recent load test, you discover that Spanner performance is not scaling linearly as expected. Which of the following is the cause?
D. The use of LIKE instead of STARTS_WITH keyword for parameterized SQL queries.
Correct Answer: B
C is correct https://cloud.google.com/spanner/docs/schema-and-data-model#choosing_a_primary_key
upvoted 8 times
Selected Answer: C
B, the use of the STRING data type for arbitrary-precision values, is not likely to cause performance issues in Cloud Spanner.
D, the use of LIKE instead of STARTS_WITH keyword for parameterized SQL queries, is not likely to cause performance issues in Cloud Spanner.
upvoted 1 times
C is the answer.
https://cloud.google.com/spanner/docs/schema-design#primary-key-prevent-hotspots
Schema design best practice #1: Do not choose a column whose value monotonically increases or decreases as the first key part for a high write
rate table.
upvoted 1 times
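To illustrate the hotspotting point behind C, a sketch of writing rows whose first key part is a UUID rather than a monotonically increasing order number (instance, database, and table names are hypothetical):

```python
import uuid
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("ecommerce")

# Using a UUIDv4 as the first primary-key part spreads writes across
# the key space; a monotonically increasing order number as the first
# key part would concentrate writes on one split (a hotspot).
with database.batch() as batch:
    batch.insert(
        table="orders",
        columns=("order_id", "customer_id", "total"),
        values=[(str(uuid.uuid4()), "c123", 59.90)],
    )
```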
C is correct
upvoted 2 times
Selected Answer: C
Community choice is C
upvoted 2 times
You are developing an application that reads credit card data from a Pub/Sub subscription. You have written code and completed unit testing. You need to test the
Pub/Sub integration before deploying to Google Cloud. What should you do?
A. Create a service to publish messages, and deploy the Pub/Sub emulator. Generate random content in the publishing service, and publish to
the emulator.
B. Create a service to publish messages to your application. Collect the messages from Pub/Sub in production, and replay them through the
publishing service.
C. Create a service to publish messages, and deploy the Pub/Sub emulator. Collect the messages from Pub/Sub in production, and publish them to the emulator.
D. Create a service to publish messages, and deploy the Pub/Sub emulator. Publish a standard set of testing messages from the publishing service to the emulator.
Correct Answer: D
I vote D
upvoted 6 times
Selected Answer: D
https://cloud.google.com/pubsub/docs/emulator
upvoted 1 times
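A sketch of option D's flow: point the client library at a local emulator and publish a standard set of test messages (host/port, project, and topic names are assumptions; start the emulator first with `gcloud beta emulators pubsub start`):

```python
import os

# Point the client library at the locally running emulator; this must
# be set before the client is created.
os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("test-project", "credit-card-events")
publisher.create_topic(name=topic_path)

# Publish a standard set of test messages; nothing leaves the machine.
for payload in (b"card-charge-1", b"card-charge-2"):
    publisher.publish(topic_path, payload).result()
```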
D is correct
upvoted 1 times
D is the answer.
upvoted 1 times
Selected Answer: D
Vote D
upvoted 1 times
Selected Answer: D
D is correct
upvoted 2 times
You are designing an application that will subscribe to and receive messages from a single Pub/Sub topic and insert corresponding rows into a
database. Your application runs on Linux and leverages preemptible virtual machines to reduce costs. You need to create a shutdown script that gracefully shuts down the application before the virtual machines are preempted. What should you do?
A. Write a shutdown script that uses inter-process signals to notify the application process to disconnect from the database.
B. Write a shutdown script that broadcasts a message to all signed-in users that the Compute Engine instance is going down and instructs
C. Write a shutdown script that writes a file in a location that is being polled by the application once every five minutes. After the file is read, the application disconnects from the database.
D. Write a shutdown script that publishes a message to the Pub/Sub topic announcing that a shutdown is in progress. After the application reads the message, it disconnects from the database.
Correct Answer: C
Reference:
https://cloud.google.com/compute/docs/shutdownscript
Selected Answer: A
IMO it should be A
upvoted 7 times
I vote A
upvoted 5 times
Selected Answer: A
I vote A:
- https://cloud.google.com/compute/docs/instances/preemptible#preemption
- https://cloud.google.com/compute/docs/shutdownscript
Option D is not good, as we only have a SINGLE Pub/Sub topic that is also receiving other messages. I wouldn't rely on the new (shutdown) message coming through and being read by the app in time to disconnect from the DB.
upvoted 1 times
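A sketch of the application side that option A implies: the shutdown script signals the process (for example `kill -TERM <pid>`), and the app disconnects cleanly. sqlite3 stands in for the real database here:

```python
import signal
import sqlite3
import sys

# Stand-in for the real database connection, opened at startup.
db_connection = sqlite3.connect("orders.db")

def handle_sigterm(signum, frame):
    # The shutdown script signals this process during preemption;
    # close the connection cleanly before the instance goes away.
    db_connection.close()
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
```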
Option D is a suitable approach for initiating a graceful shutdown in a scenario where the application needs to receive a notification to disconnect
from the database before the virtual machine is preempted. Here's how the process works:
upvoted 1 times
Publish a Message to Pub/Sub: In the shutdown script, publish a message to the Pub/Sub topic, indicating that a shutdown is in progress. This
message serves as a notification to the application.
Application Subscription: The application subscribes to the Pub/Sub topic and continuously listens for incoming messages
upvoted 1 times
Selected Answer: A
I will go with A
upvoted 1 times
Selected Answer: A
https://cloud.google.com/compute/docs/instances/preemptible#preemption
upvoted 1 times
Selected Answer: A
A is correct
upvoted 2 times
You work for a web development team at a small startup. Your team is developing a Node.js application using Google Cloud services, including
Cloud Storage and Cloud Build. The team uses a Git repository for version control. Your manager calls you over the weekend and instructs you to
make an emergency update to one of the company's websites, and you're the only developer available. You need to access Google Cloud to make
the update, but you don't have your work laptop. You are not allowed to store source code locally on a non-corporate computer. How should you proceed?
A. Use a text editor and the Git command line to send your source code updates as pull requests from a public computer.
B. Use a text editor and the Git command line to send your source code updates as pull requests from a virtual machine running on a public
computer.
C. Use Cloud Shell and the built-in code editor for development. Send your source code updates as pull requests.
D. Use a Cloud Storage bucket to store the source code that you need to edit. Mount the bucket to a public computer as a drive, and use a
code editor to update the code. Turn on versioning for the bucket, and point it to the team's Git repository.
Correct Answer: A
Reference:
https://docs.github.com/en/get-started/quickstart/contributing-to-projects
Selected Answer: C
C is correct
upvoted 1 times
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
C is the answer.
https://cloud.google.com/shell/docs
Cloud Shell is an interactive shell environment for Google Cloud that lets you learn and experiment with Google Cloud and manage your projects
and resources from your web browser.
With Cloud Shell, the Google Cloud CLI and other utilities you need are pre-installed, fully authenticated, up-to-date, and always available when
you need them. Cloud Shell comes with a built-in code editor with an integrated Cloud Code experience, allowing you to develop, build, debug,
and deploy your cloud-based apps entirely in the cloud.
upvoted 2 times
C is correct
upvoted 2 times
Selected Answer: C
Vote C
upvoted 1 times
Selected Answer: C
Your team develops services that run on Google Kubernetes Engine. You need to standardize their log data using Google-recommended practices
and make the data more useful in the fewest number of steps. What should you do? (Choose two.)
A. Create aggregated exports on application logs to BigQuery to facilitate log analytics.
B. Create aggregated exports on application logs to Cloud Storage to facilitate log analytics.
C. Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs.
D. Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging.
E. Mandate the use of the Pub/Sub API to write structured data to Pub/Sub and create a Dataflow streaming pipeline to normalize logs and write them to BigQuery.
Correct Answer: AE
I go for A, C
upvoted 9 times
Selected Answer: AC
Option A to “make the data more useful”, as BigQuery will allow us to use big data analysis capabilities on the stored logs:
https://cloud.google.com/logging/docs/export/aggregated_sinks#supported-destinations
Option C to “to standardize their log data” creating structured logs: https://cloud.google.com/kubernetes-engine/docs/concepts/about-
logs#best_practices
Option D is also a viable solution but C is preferred, considering the “fewest number of steps” requirement.
Choosing C and D together makes no sense, as both aim to achieve the same goal.
upvoted 1 times
This practice allows you to use structured logs, specifically in JSON format, making it easier to parse and analyze log data.
Cloud Logging can ingest logs from standard output, and structured logs enhance the usability of log data.
Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging:
Using the Logging API allows your applications to send structured log data directly to Cloud Logging.
Structured logs provide more context and are easier to filter, search, and analyze within Cloud Logging.
upvoted 1 times
Option A: Create aggregated exports on application logs to BigQuery. This will facilitate log analytics by exporting application logs to BigQuery,
which is a fully-managed, serverless data warehouse. BigQuery allows you to perform advanced analytics on your log data, including running
complex queries and visualizing the results.
Option C: Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs. This approach
involves writing log output to standard output in a specific format (single-line JSON) that can be easily ingested by Cloud Logging. By using
structured logs, you can take advantage of advanced querying and filtering capabilities provided by Cloud Logging.
upvoted 1 times
Selected Answer: AC
https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs#best_practices
upvoted 1 times
Selected Answer: CD
CD. Only C and D mentioned Cloud Logging. Other options involve extra steps and won't come out free.
"When you create a new GKE cluster, Cloud Operations for GKE integration with Cloud Logging and Cloud Monitoring is enabled by default."
https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs
upvoted 1 times
Selected Answer: AC
"Fewest number of steps" -> I believe this sentence is the key; option D would take more steps.
also: https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs#best_practices
upvoted 1 times
Selected Answer: AD
a d is correct
upvoted 2 times
Selected Answer: AC
AC is the answer.
upvoted 1 times
Selected Answer: CD
If log data is to be standardized and made more useful with minimal steps, I think CD is the right answer.
https://cloud.google.com/logging/docs/structured-logging
upvoted 3 times
https://cloud.google.com/logging/docs/structured-logging
If you're using Google Kubernetes Engine or the App Engine flexible environment, you can write structured logs as JSON objects serialized on a
single line to stdout or stderr. The Logging agent then sends the structured logs to Cloud Logging as the jsonPayload of the LogEntry structure.
This point to C
upvoted 4 times
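A minimal sketch of option C, emitting single-line JSON on stdout (field names other than `severity` and `message` are arbitrary labels chosen for illustration):

```python
import json
import sys

def log(severity: str, message: str, **fields) -> None:
    """Emit one single-line JSON log entry on stdout; on GKE the Logging
    agent ingests it into Cloud Logging as a structured jsonPayload."""
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout, flush=True)

log("INFO", "order created", order_id="o-123", user_id="u-456")
```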
Selected Answer: AD
AD are correct
upvoted 3 times
You are designing a deployment technique for your new applications on Google Cloud. As part of your deployment planning, you want to use live
traffic to gather performance metrics for both new and existing applications. You need to test against the full production load prior to launch. Which deployment strategy should you use?
Correct Answer: A
Reference:
https://cloud.google.com/architecture/application-deployment-and-testing-strategies
Selected Answer: D
Canary will only redirect a small portion of the traffic, while A/B with mirroring will test the new version in full load
upvoted 1 times
Selected Answer: D
A/B testing with traffic mirroring during deployment. This technique mirrors live production traffic to the new application version while the existing version continues to serve all user requests. By comparing the performance metrics of both versions under real-world conditions, you can assess the impact of the new deployment on your application's performance and stability.
upvoted 1 times
Selected Answer: D
"You need to test against the full production load prior to launch" It's impossible with canary.
"A/B testing with traffic mirroring during deployment" is the only one possibility we have to test the entire traffic before the roll out.
upvoted 3 times
Selected Answer: D
D is the answer.
upvoted 1 times
Selected Answer: D
D is correct
upvoted 2 times
Selected Answer: D
Selected Answer: A
Selected Answer: A
Vote A
upvoted 1 times
Canary deployment is a technique to reduce the risk of introducing a software update in production by slowly rolling out the change to a small
subset of users before making it available to everybody.
This deployment technique is one where the SRE of an application development team relies on a router or load balancer to target individual routes.
They target a small fragment of the overall user base with the newer version of the application. Once this new set of users has used the application, important metrics are collected and analyzed to decide whether the new update is good for a full-scale rollout to all the users or whether it needs to be rolled back for further troubleshooting.
upvoted 2 times
Selected Answer: D
you want to use live traffic to gather performance metrics for both new and existing applications
upvoted 3 times
You support an application that uses the Cloud Storage API. You review the logs and discover multiple HTTP 503 Service Unavailable error responses from the
API. Your application logs the error and does not take any further action. You want to implement Google-recommended retry logic to improve
success rates. What should you do?
Correct Answer: C
Selected Answer: C
Selected Answer: C
Selected Answer: C
C is the answer.
https://cloud.google.com/storage/docs/retry-strategy#exponential-backoff
Truncated exponential backoff is a standard error handling strategy for network applications in which a client periodically retries a failed request
with increasing delays between requests.
An exponential backoff algorithm retries requests exponentially, increasing the waiting time between retries up to a maximum backoff time.
upvoted 1 times
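As a rough shell sketch only (the bucket name and retry cap are hypothetical, and production code should also add random jitter), truncated exponential backoff could look like this:

backoff=1; max_backoff=64
for attempt in 1 2 3 4 5 6 7 8 9 10; do
  # curl -f exits non-zero on HTTP errors such as 503, triggering a retry
  curl -sf -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://storage.googleapis.com/storage/v1/b/my-bucket/o" > /dev/null && break
  sleep "$backoff"                                            # wait before retrying
  backoff=$(( backoff * 2 ))                                  # double the delay each attempt
  [ "$backoff" -gt "$max_backoff" ] && backoff=$max_backoff   # truncate at the cap
done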
Selected Answer: C
C is correct
upvoted 2 times
You need to redesign the ingestion of audit events from your authentication service to allow it to handle a large increase in traffic. Currently, the
audit service and the authentication system run in the same Compute Engine virtual machine. You plan to use the following Google Cloud tools in
✑ Multiple Compute Engine machines, each running an instance of the authentication service
✑ Multiple Compute Engine machines, each running an instance of the audit service
✑ Pub/Sub to send the events from the authentication services.
How should you set up the topics and subscriptions to ensure that the system can handle a large volume of messages and can scale efficiently?
A. Create one Pub/Sub topic. Create one pull subscription to allow the audit services to share the messages.
B. Create one Pub/Sub topic. Create one pull subscription per audit service instance to allow the services to share the messages.
C. Create one Pub/Sub topic. Create one push subscription with the endpoint pointing to a load balancer in front of the audit services.
D. Create one Pub/Sub topic per authentication service. Create one pull subscription per topic to be used by one audit service.
E. Create one Pub/Sub topic per authentication service. Create one push subscription per topic, with the endpoint pointing to one audit
service.
Correct Answer: D
https://cloud.google.com/pubsub/docs/subscriber
"Multiple subscribers can make pull calls to the same "shared" subscription. Each subscriber will receive a subset of the messages."
Response is A.
With C and D you can't scale efficiently, because you have to create a topic for each new instance of the authentication service.
upvoted 12 times
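A minimal gcloud sketch of option A (the topic and subscription names are hypothetical):

gcloud pubsub topics create audit-events
gcloud pubsub subscriptions create audit-events-sub --topic=audit-events
# Every authentication service instance publishes to audit-events; all audit
# service instances pull from the same shared subscription, and each receives
# a subset of the messages, so both tiers scale independently.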
Selected Answer: A
Option E is a more scalable and efficient solution for handling a large volume of messages and scaling efficiently.
upvoted 1 times
Selected Answer: A
I would go with A.
upvoted 1 times
Single Topic: Having one Pub/Sub topic keeps things simpler and allows all authentication service instances to publish to the same topic.
Push Subscription with Load Balancer: This allows incoming messages to be distributed among all available audit service instances. The load
balancer would handle distributing the load, making it easier for the audit service to scale out as needed.
Option C ensures both scalability and efficient handling of a large volume of messages.
upvoted 1 times
Selected Answer: A
Selected Answer: A
A is correct. This is the most flexible way to scale, allowing the authentication and audit services to be sized independently according to load.
B is incorrect. This will cause messages to be duplicated, one copy per subscription.
C is incorrect. This will allow the system to scale, but push subscriptions are less suited to handle large volumes of messages.
D is incorrect. This will allow the system to scale; however, each audit service will listen to all subscriptions.
E is incorrect. This will allow the system to scale; however, it will require each audit service to listen to all subscriptions. Also, push subscriptions are less suited to handling large volumes of messages.
upvoted 4 times
B, in which there is only one topic and one pull subscription per audit service, would also not allow the audit services to scale horizontally, as
they would all be pulling messages from the same topic
upvoted 1 times
A is the answer.
upvoted 1 times
Selected Answer: A
A is correct I think
upvoted 2 times
You are developing a marquee stateless web application that will run on Google Cloud. The rate of the incoming user traffic is expected to be
unpredictable, with no traffic on some days and large spikes on other days. You need the application to automatically scale up and down, and you
need to minimize the cost associated with running the application. What should you do?
A. Build the application in Python with Firestore as the database. Deploy the application to Cloud Run.
B. Build the application in C# with Firestore as the database. Deploy the application to App Engine flexible environment.
C. Build the application in Python with CloudSQL as the database. Deploy the application to App Engine standard environment.
D. Build the application in Python with Firestore as the database. Deploy the application to a Compute Engine managed instance group with
autoscaling.
Correct Answer: C
Both Cloud Run (A) and App Engine standard (C) can scale to zero, so we need to find the correct database: Firestore vs. Cloud SQL. Since we don't have any details on whether the data structures require a relational or NoSQL model, I'd go for Firestore because it is more flexible in scalability than Cloud SQL, and you also pay only for storage usage plus operations.
upvoted 11 times
Selected Answer: A
I agree with A, as it is the only option that fits scaling up and down.
upvoted 6 times
Selected Answer: A
Option A is a suitable choice for building a stateless web application with unpredictable traffic, aiming to automatically scale up and down while
minimizing costs
upvoted 1 times
Selected Answer: A
Selected Answer: A
Both Cloud Run and the App Engine standard environment allow scaling to zero (which minimizes the cost), but Cloud SQL can't be scaled down to zero, while Firestore is billed per operation and storage used.
So from the cost point of view, A is the answer.
upvoted 1 times
B and C, which involve deploying the application to App Engine, may also allow the application to automatically scale, but they may not be as cost-effective as Cloud Run. Option D, which involves deploying the application to a Compute Engine managed instance group, would allow the application to automatically scale, but it would not be as cost-effective as Cloud Run, because you would pay for the running instances even when there is no traffic.
upvoted 2 times
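For illustration, a sketch of option A's deployment step (the service name, image path, and region are hypothetical):

gcloud run deploy app-name \
  --image=gcr.io/$PROJECT_ID/app-name \
  --region=us-central1
# Cloud Run scales container instances down to zero when there is no traffic,
# so nothing is billed on days without requests.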
Selected Answer: A
A is the answer.
upvoted 1 times
Selected Answer: A
A is correct
upvoted 2 times
Selected Answer: A
You have written a Cloud Function that accesses other Google Cloud resources. You want to secure the environment using the principle of least
A. Create a new service account that has Editor authority to access the resources. The deployer is given permission to get the access token.
B. Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to get the access
token.
C. Create a new service account that has Editor authority to access the resources. The deployer is given permission to act as the new service
account.
D. Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to act as the new
service account.
Correct Answer: D
Reference:
https://cloud.google.com/blog/products/application-development/least-privilege-for-cloud-functions-using-cloud-iam
Agree with D
upvoted 7 times
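A sketch of option D under assumed names (the service account, custom role, and deployer identity are all hypothetical):

# Service account carrying only a custom role with the needed permissions
gcloud iam service-accounts create fn-runtime
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:fn-runtime@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="projects/$PROJECT_ID/roles/fnCustomRole"
# Let the deployer act as (impersonate) the new service account
gcloud iam service-accounts add-iam-policy-binding \
  fn-runtime@$PROJECT_ID.iam.gserviceaccount.com \
  --member="user:deployer@example.com" \
  --role="roles/iam.serviceAccountUser"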
Selected Answer: D
https://cloud.google.com/functions/docs/securing/function-identity#individual
upvoted 1 times
Selected Answer: B
This approach allows you to create a service account with a custom IAM role that provides only the necessary permissions required by your Cloud
Function. By granting the deployer permission to get the access token, you ensure that they can obtain the necessary credentials to deploy and
manage the Cloud Function.
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/functions/docs/securing/function-identity#per-function_identity
upvoted 2 times
Selected Answer: D
D is correct
upvoted 2 times
modes, such as an API key, OAuth 2.0 client, or service account key."
and
"Note: In order to deploy a function with a user-managed service account, the deployer must have the iam.serviceAccounts.actAs permission
on the service account being deployed."
upvoted 4 times
You are a SaaS provider deploying dedicated blogging software to customers in your Google Kubernetes Engine (GKE) cluster. You want to
configure a secure multi-tenant platform to ensure that each customer has access to only their own blog and can't affect the workloads of other
B. Deploy a namespace per tenant and use Network Policies in each blog deployment.
C. Use GKE Audit Logging to identify malicious containers and delete them on discovery.
D. Build a custom image of the blogging software and use Binary Authorization to prevent untrusted image deployments.
Correct Answer: B
Reference:
https://cloud.google.com/kubernetes-engine/docs/concepts/multitenancy-overview
Option B is correct
https://cloud.google.com/kubernetes-engine/docs/concepts/multitenancy-overview
upvoted 7 times
Selected Answer: B
This approach involves creating a separate namespace for each customer (tenant) and using Network Policies to enforce isolation between the
namespaces. By deploying a namespace per tenant, you can ensure that each customer has access only to their own blog and cannot affect the
workloads of other customers.
upvoted 1 times
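A minimal sketch of option B (the tenant namespace and policy names are hypothetical):

kubectl create namespace tenant-a
kubectl apply -n tenant-a -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
spec:
  podSelector: {}          # applies to every pod in the tenant's namespace
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector: {}      # only pods in this same namespace may connect
EOF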
https://cloud.google.com/kubernetes-engine/docs/concepts/multitenancy-overview
upvoted 1 times
B is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/multitenancy-overview#what_is_multi-tenancy
Although Kubernetes cannot guarantee perfectly secure isolation between tenants, it does offer features that may be sufficient for specific use
cases. You can separate each tenant and their Kubernetes resources into their own namespaces. You can then use policies to enforce tenant
isolation. Policies are usually scoped by namespace and can be used to restrict API access, to constrain resource usage, and to restrict what
containers are allowed to do.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 2 times
You have decided to migrate your Compute Engine application to Google Kubernetes Engine. You need to build a container image and push it to
Artifact Registry using Cloud Build. What should you do? (Choose two.)
A. Run gcloud builds submit in the directory that contains the application source code.
B. Run gcloud run deploy app-name --image gcr.io/$PROJECT_ID/app-name in the directory that contains the application source code.
C. Run gcloud container images add-tag gcr.io/$PROJECT_ID/app-name gcr.io/$PROJECT_ID/app-name:latest in the directory that contains
D. In the application source directory, create a file named cloudbuild.yaml that contains the following contents:
E. In the application source directory, create a file named cloudbuild.yaml that contains the following contents:
Correct Answer: BD
Selected Answer: AD
Run gcloud builds submit in the directory that contains the application source code. This command will trigger Cloud Build to build the container
image and push it to Artifact Registry.
In the application source directory, create a file named cloudbuild.yaml that contains the instructions for building and pushing the container image.
The file should contain the following steps:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app-name', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/app-name']
This file will be used by Cloud Build to build and push the container image.
upvoted 2 times
C is incorrect because it uses the gcloud container images add-tag command, which is used to add a tag to an existing container image in
Container Registry, not to build and push a new container image to Artifact Registry.
E is incorrect because it uses the gcloud app deploy command, which is used to deploy an application to App Engine, not to build and push a
container image to Artifact Registry.
upvoted 2 times
Selected Answer: AD
AD is the answer.
https://cloud.google.com/build/docs/building/build-containers#store-images
upvoted 2 times
Selected Answer: AD
AD are correct
upvoted 2 times
https://cloud.google.com/build/docs/building/build-containers
upvoted 2 times
agree with AD
upvoted 2 times
Selected Answer: AD
You are developing an internal application that will allow employees to organize community events within your company. You deployed your
application on a single Compute Engine instance. Your company uses Google Workspace (formerly G Suite), and you need to ensure that the
company employees can authenticate to the application from anywhere. What should you do?
A. Add a public IP address to your instance, and restrict access to the instance using firewall rules. Allow your company's proxy as the only
source IP address.
B. Add an HTTP(S) load balancer in front of the instance, and set up Identity-Aware Proxy (IAP). Configure the IAP settings to allow your
C. Set up a VPN tunnel between your company network and your instance's VPC location on Google Cloud. Configure the required firewall rules
and routing information to both the on-premises and Google Cloud networks.
D. Add a public IP address to your instance, and allow traffic from the internet. Generate a random hash, and create a subdomain that includes
this hash and points to your instance. Distribute this DNS address to your company's employees.
Correct Answer: C
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/iap/docs/concepts-overview#how_iap_works
When an application or resource is protected by IAP, it can only be accessed through the proxy by principals, also known as users, who have the
correct Identity and Access Management (IAM) role. When you grant a user access to an application or resource by IAP, they're subject to the fine-
grained access controls implemented by the product in use without requiring a VPN. When a user tries to access an IAP-secured resource, IAP
performs authentication and authorization checks.
upvoted 3 times
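A hedged sketch of the IAP setup in option B (the backend service name and domain are hypothetical):

gcloud iap web enable --resource-type=backend-services --service=app-backend
gcloud iap web add-iam-policy-binding \
  --resource-type=backend-services --service=app-backend \
  --member="domain:example.com" \
  --role="roles/iap.httpsResourceAccessor"
# Employees in the Workspace domain can now authenticate from anywhere.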
B is correct
upvoted 2 times
B is the more suitable answer because the question states that employees should be able to access the application from anywhere; IAP will allow access based on your Workspace credentials.
upvoted 2 times
Your development team is using Cloud Build to promote a Node.js application built on App Engine from your staging environment to production.
The application relies on several directories of photos stored in a Cloud Storage bucket named webphotos-staging in the staging environment.
After the promotion, these photos must be available in a Cloud Storage bucket named webphotos-prod in the production environment. You want to
B. Add a startup script in the application's app.yaml file to move the photos from webphotos-staging to webphotos-prod.
C. Add a build step in the cloudbuild.yaml file before the promotion step with the arguments:
D. Add a build step in the cloudbuild.yaml file before the promotion step with the arguments:
Correct Answer: C
You should add a build step in the cloudbuild.yaml file before the promotion step with the arguments shown above. This build step will use the
gsutil tool to copy the photos from the webphotos-staging bucket to the webphotos-prod bucket. The -r flag tells gsutil to copy all files in the
bucket recursively, and the waitFor parameter tells Cloud Build to wait for this step to complete before continuing with the promotion step.
upvoted 3 times
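Since the arguments in the answer options are truncated above, here is a hedged sketch of what such a build step could look like, using the bucket names from the question:

steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', '-r', 'gs://webphotos-staging', 'gs://webphotos-prod']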
Selected Answer: C
C is the answer.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 2 times
Selected Answer: C
https://cloud.google.com/storage/docs/gsutil/commands/cp
upvoted 1 times
You are developing a web application that will be accessible over both HTTP and HTTPS and will run on Compute Engine instances. On occasion,
you will need to SSH from your remote laptop into one of the Compute Engine instances to conduct maintenance on the app. How should you
A. Set up a backend with Compute Engine web server instances with a private IP address behind a TCP proxy load balancer.
B. Configure the firewall rules to allow all ingress traffic to connect to the Compute Engine web servers, with each server having a unique
external IP address.
C. Configure Cloud Identity-Aware Proxy API for SSH access. Then configure the Compute Engine servers with private IP addresses behind an
D. Set up a backend with Compute Engine web server instances with a private IP address behind an HTTP(S) load balancer. Set up a bastion
host with a public IP address and open firewall ports. Connect to the web instances using the bastion host.
Correct Answer: C
Reference:
https://cloud.google.com/compute/docs/instances/connecting-advanced#cloud_iap
Selected Answer: D
C is correct
upvoted 1 times
If you need to connect to a VM that doesn't have external IP addresses and you can't use IAP, review the other methods listed in Connection
options for internal-only VMs.
upvoted 1 times
Selected Answer: C
The web server instances are only accessible through the load balancer and not directly via their private IP addresses, which improves security.
The bastion host acts as a secure jump box that allows you to SSH into the web server instances, while only allowing incoming SSH connections on
a specific IP address (the bastion host's public IP).
The firewall rules on the web server instances can be configured to only allow connections from the bastion host's IP, further reducing the attack
surface.
It is more recommended to have a bastion host that is authorized by your organization to connect to private instances; this way it can provide better security for your instances. In terms of compliance, it also follows your organization's best practices.
upvoted 3 times
Cloud IAP allows you to control access to resources in your project by using identity and access management (IAM) roles, which is a good way
to secure SSH access. However, this option does not address the issue of securing incoming web traffic, which is a separate concern.
Configuring the servers with private IP addresses behind an HTTP(s) load balancer would help with securing the web traffic, but it does not
provide an additional layer of security for SSH access. Additionally, it does not have the concept of secure jump host, which is a security best
practice in protecting your instances from unwanted incoming connections.
upvoted 3 times
Selected Answer: C
C is the answer.
https://cloud.google.com/iap
upvoted 1 times
Selected Answer: C
https://cloud.google.com/solutions/connecting-securely#storing_host_keys_by_enabling_guest_attributes
Answer C
upvoted 2 times
C is correct
upvoted 2 times
Selected Answer: C
With TCP forwarding, IAP can protect SSH and RDP access to your VMs hosted on Google Cloud. Your VM instances don't even need public IP
addresses.
https://cloud.google.com/iap
upvoted 1 times
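Once IAP TCP forwarding is set up, an SSH session from a remote laptop is one command (the instance name and zone are hypothetical):

gcloud compute ssh web-instance-1 --zone=us-central1-a --tunnel-through-iap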
You have a mixture of packaged and internally developed applications hosted on a Compute Engine instance that is running Linux. These
applications write log records as text in local files. You want the logs to be written to Cloud Logging. What should you do?
D. Using cron, schedule a job to copy the log files to Cloud Storage once a day.
Correct Answer: B
Reference:
https://cloud.google.com/logging/docs/agent/logging/configuration
Agree with B
upvoted 7 times
Selected Answer: B
Selected Answer: B
B is the answer.
https://cloud.google.com/stackdriver/docs/solutions/agents/ops-agent
The Ops Agent is the primary agent for collecting telemetry from your Compute Engine instances. Combining logging and metrics into a single
agent, the Ops Agent uses Fluent Bit for logs, which supports high-throughput logging, and the OpenTelemetry Collector for metrics.
You can configure the Ops Agent to support parsing of log files from third-party applications.
upvoted 1 times
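A hedged sketch of an Ops Agent configuration (/etc/google-cloud-ops-agent/config.yaml) that tails custom text log files; the application log paths are hypothetical:

logging:
  receivers:
    app_logs:
      type: files
      include_paths:
      - /var/log/myapp/*.log
  service:
    pipelines:
      default_pipeline:
        receivers: [app_logs]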
Selected Answer: B
B is correct
upvoted 4 times
You want to create `fully baked` or `golden` Compute Engine images for your application. You need to bootstrap your application to connect to the
appropriate database according to the environment the application is running on (test, staging, production). What should you do?
A. Embed the appropriate database connection string in the image. Create a different image for each environment.
B. When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the
Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.
C. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, read the "DATABASE" environment variable, and use the value to connect to the appropriate database.
D. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, query the metadata server for the "DATABASE" value, and use the value to connect to the appropriate database.
Correct Answer: C
I vote D.
https://cloud.google.com/compute/docs/metadata/querying-metadata
upvoted 6 times
This approach allows you to create a single golden image that is agnostic to the environment it is running in, while still allowing the appropriate
database connection to be set at runtime. The metadata item is stored with the instance, so it can be read by your application at any time. This
method allows you to avoid creating different images for different environments, and to use the same image for all environments.
You can create metadata item by using the gcloud command line tool or the API to set metadata for a Compute Engine instance. Once set, the
metadata can be easily accessed by your application via the instance metadata server.
upvoted 1 times
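A sketch of option D (the instance name, zone, and connection string are hypothetical; the DATABASE key matches the question):

gcloud compute instances create app-vm --zone=us-central1-a \
  --metadata=DATABASE="mysql://db-staging.internal:3306/app"
# Inside the VM, the application reads the value back from the metadata server:
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE"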
Selected Answer: D
D is the answer.
https://cloud.google.com/compute/docs/metadata/setting-custom-metadata
upvoted 1 times
Selected Answer: D
D is correct
upvoted 2 times
Selected Answer: D
It should be D!
upvoted 1 times
https://cloud.google.com/compute/docs/metadata/overview
upvoted 3 times
You are developing a microservice-based application that will be deployed on a Google Kubernetes Engine cluster. The application needs to read
and write to a
Spanner database. You want to follow security best practices while minimizing code changes. How should you configure your application to
A. Configure the appropriate service accounts, and use Workload Identity to run the pods.
B. Store the application credentials as Kubernetes Secrets, and expose them as environment variables.
C. Configure the appropriate routing rules, and use a VPC-native cluster to directly connect to the database.
D. Store the application credentials using Cloud Key Management Service, and retrieve them whenever a database connection is made.
Correct Answer: B
Reference:
https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine
Selected Answer: A
https://cloud.google.com/blog/products/containers-kubernetes/introducing-workload-identity-better-authentication-for-your-gke-applications
A Cloud IAM service account is an identity that an application can use to make requests to Google APIs. As an application developer, you could
generate individual IAM service accounts for each application, and then download and store the keys as a Kubernetes secret that you manually
rotate. Not only is this process burdensome, but service account keys only expire every 10 years (or until you manually rotate them). In the case of
a breach or compromise, an unaccounted-for key could mean prolonged access for an attacker. This potential blind spot, plus the management
overhead of key inventory and rotation, makes using service account keys as secrets a less than ideal method for authenticating GKE workloads.
upvoted 7 times
I assume that nobody read through the official docs and GCP Best practices for K8s and Cloud SQL.
https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine#secrets
"A database credentials Secret includes the name of the database user you are connecting as, and the user's database password."
The best answer here is B, having K8s Secrets is the go-to method to configure and store sensitive information within a cluster such as Spanner
credentials
upvoted 5 times
Selected Answer: A
Option A is the recommended approach for securely configuring your microservice-based application to retrieve Spanner credentials on Google
Kubernetes Engine (GKE)
upvoted 1 times
Selected Answer: A
This approach involves configuring service accounts with the necessary permissions to access the Spanner database. By using Workload Identity,
you can associate these service accounts with your Kubernetes Engine pods, allowing them to authenticate and retrieve Spanner credentials
automatically.
upvoted 1 times
Selected Answer: B
I go for B.
The question is about how to RETRIEVE DB creds:
https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine#secrets
A is about how to connect to Spanner.
upvoted 1 times
Selected Answer: A
Google recommends using service accounts and Workload Identity whenever possible.
upvoted 2 times
Workload Identity is a way to associate Kubernetes service accounts with Google Cloud IAM service accounts, allowing your pods to authenticate to
Google Cloud services using their IAM identity. This means that you don't have to store application credentials in your code or in Kubernetes
Secrets, and you can manage the permissions of your application in Google Cloud IAM.
You would need to create service account in cloud IAM and a Kubernetes service account and then map them to use Workload Identity.
You can also use gcloud command line to map the Kubernetes service account to the desired IAM service account. Then in your application, you
can use the Kubernetes service account to authenticate to Spanner, which will authenticate as the mapped IAM service account.
This way you don't have to hardcode credentials in your code, and you can easily manage the permissions of your application using Google Cloud
IAM.
upvoted 1 times
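A sketch of the Workload Identity binding described above (the namespace, Kubernetes service account, and IAM service account names are hypothetical):

# Allow the Kubernetes service account to impersonate the IAM service account
gcloud iam service-accounts add-iam-policy-binding \
  spanner-app@$PROJECT_ID.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:$PROJECT_ID.svc.id.goog[app-ns/app-ksa]"
# Annotate the Kubernetes service account with the IAM service account
kubectl annotate serviceaccount app-ksa -n app-ns \
  iam.gke.io/gcp-service-account=spanner-app@$PROJECT_ID.iam.gserviceaccount.com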
Selected Answer: A
A is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity#what_is
Applications running on GKE might need access to Google Cloud APIs such as Compute Engine API, BigQuery Storage API, or Machine Learning
APIs.
Workload Identity allows a Kubernetes service account in your GKE cluster to act as an IAM service account. Pods that use the configured
Kubernetes service account automatically authenticate as the IAM service account when accessing Google Cloud APIs. Using Workload Identity
allows you to assign distinct, fine-grained identities and authorization for each application in your cluster.
upvoted 1 times
Selected Answer: A
Answer is A
Store the application credentials as Kubernetes Secrets, and expose them as environment variables
upvoted 2 times
Secrets are similar to ConfigMaps but are specifically intended to hold confidential data.
Caution:
Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve
or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use
that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
upvoted 1 times
Selected Answer: A
I think A is correct
upvoted 4 times
Selected Answer: A
A and B both are correct. Currently in my project we are using A to allow pods to query BigQuery. So A and B both seem to be correct.
upvoted 1 times
Selected Answer: B
B.
The question is not about how to connect to or access Cloud Spanner, but about how to "retrieve Spanner *credentials*".
upvoted 3 times
Selected Answer: A
You are deploying your application on a Compute Engine instance that communicates with Cloud SQL. You will use Cloud SQL Proxy to allow your
application to communicate to the database using the service account associated with the application's instance. You want to follow the Google-
recommended best practice of providing minimum access for the role assigned to the service account. What should you do?
Correct Answer: C
Reference:
https://cloud.google.com/sql/docs/mysql/sql-proxy
I vote C
https://cloud.google.com/sql/docs/mysql/roles-and-permissions
upvoted 6 times
Selected Answer: C
Cloud SQL Client role: This role provides the necessary permissions to interact with Cloud SQL while minimizing access to other resources.
upvoted 1 times
The Cloud SQL Client role has the minimal set of permissions required to access Cloud SQL instances. This role includes permissions to connect to
and use a Cloud SQL instance, but it doesn't include permissions to create, delete or manage the instance itself. This role should be granted to the
service account associated with your Compute Engine instance, in order to allow your application to connect to the Cloud SQL instance using the
Cloud SQL Proxy.
You can assign the Cloud SQL Client role to a service account by using the Cloud Console, the gcloud command-line tool, or the Cloud Identity and
Access Management (IAM) API. Once the role is assigned, your application will be able to authenticate to Cloud SQL using the service account and
the Cloud SQL Proxy.
It is important to note that the permissions granted by this role should be limited to the specific Cloud SQL instance that the application needs to
connect to and not the entire project, to minimize the access and follow the principle of least privilege.
upvoted 1 times
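Granting that role is a single binding (the service account name is hypothetical):

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:app-vm-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"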
C is the answer.
https://cloud.google.com/sql/docs/mysql/roles-and-permissions#proxy-roles-permissions
If you are connecting to a Cloud SQL instance from a Compute Engine instance using Cloud SQL Auth proxy, you can use the default Compute
Engine service account associated with the Compute Engine instance.
As with all accounts connecting to a Cloud SQL instance, the service account must have the Cloud SQL > Client role.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 3 times
Your team develops stateless services that run on Google Kubernetes Engine (GKE). You need to deploy a new service that will only be accessed
by other services running in the GKE cluster. The service will need to scale as quickly as possible to respond to changing load. What should you
do?
A. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.
B. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a NodePort Service.
C. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.
D. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a NodePort Service.
Correct Answer: C
Selected Answer: C
Horizontal Pod Autoscaler (HPA) scales the number of pod replicas based on CPU usage or other select metrics, which is suitable for quick scaling
with load changes. ClusterIP is appropriate for services only accessible within the cluster. This combination seems to meet all the requirements.
upvoted 1 times
HPA automatically scales the number of pods in a deployment based on CPU utilization or other custom metrics. By using HPA, you can ensure that
your service scales quickly to respond to changing load while minimizing manual intervention. Exposing the service via a ClusterIP Service allows
other services running in the GKE cluster to access it securely without exposing it to the public internet.
upvoted 1 times
When dealing with services that are only accessed by other services in the same GKE cluster, it's usually best to use a ClusterIP Service. This type of
service allows pods to be accessed by other pods within the cluster using their IP address, but doesn't expose them to the outside world.
upvoted 1 times
You can expose the service via a ClusterIP Service by creating one in Kubernetes and configuring the selector to match the replicas running in
your deployment. This allows other services to discover and communicate with your new service by its ClusterIP.
upvoted 1 times
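A minimal kubectl sketch of option C (the deployment name, ports, and thresholds are hypothetical):

# Scale replicas out and in based on CPU load
kubectl autoscale deployment my-service --min=2 --max=20 --cpu-percent=60
# Expose the pods only inside the cluster
kubectl expose deployment my-service --type=ClusterIP --port=80 --target-port=8080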
Selected Answer: C
C is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/service#services_of_type_clusterip
When you create a Service of type ClusterIP, Kubernetes creates a stable IP address that is accessible from nodes in the cluster.
https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in
response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics
from sources outside of your cluster.
upvoted 1 times
Selected Answer: C
C is correct
upvoted 2 times
Agree Option C
https://cloud.google.com/kubernetes-engine/docs/concepts/service
upvoted 2 times
You recently migrated a monolithic application to Google Cloud by breaking it down into microservices. One of the microservices is deployed
using Cloud
Functions. As you modernize the application, you make a change to the API of the service that is backward-incompatible. You need to support
both existing callers who use the original API and new callers who use the new API. What should you do?
A. Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use a load balancer to distribute calls
B. Leave the original Cloud Function as-is and deploy a second Cloud Function that includes only the changed API. Calls are automatically
C. Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use Cloud Endpoints to provide an API
D. Re-deploy the Cloud Function after making code changes to support the new API. Requests for both versions of the API are fulfilled based
Correct Answer: C
Reference:
https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions
Selected Answer: C
Based on the link …, which says that for a backward-incompatible change you should run two separate deployments/instances (v1 and v2), only option C is in line with the link.
upvoted 7 times
When making backward-incompatible changes to an API, it's important to provide a way for existing callers to continue using the old API while still
supporting new callers who use the new API. One way to do this is by deploying a new version of the Cloud Function that includes the new API,
and leaving the old function as-is.
By using Cloud Endpoints you can create an API Gateway that can handle multiple versions of the API, so that requests to different versions of the
API can be routed to the corresponding Cloud Function. This allows you to maintain both versions of the API and have control over which version is
exposed to the users.
This approach allows you to continue supporting existing callers while also introducing new features to the application through the new version.
Also, it gives you a lot more flexibility in terms of rollout, testing, and monitoring.
upvoted 1 times
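A hedged sketch of the Cloud Endpoints routing described above; all names and URLs are hypothetical, and each major version's path is backed by its own Cloud Function via x-google-backend:

swagger: '2.0'
info: {title: 'my-api', version: '1.0.0'}
host: my-api.endpoints.my-project.cloud.goog
paths:
  /v1/items:
    get:
      operationId: listItemsV1
      x-google-backend:
        address: https://us-central1-my-project.cloudfunctions.net/items-v1
      responses: {'200': {description: OK}}
  /v2/items:
    get:
      operationId: listItemsV2
      x-google-backend:
        address: https://us-central1-my-project.cloudfunctions.net/items-v2
      responses: {'200': {description: OK}}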
Selected Answer: C
C is the answer.
https://cloud.google.com/endpoints/docs/openapi/versioning-an-api#backwards-incompatible
When you make changes to your API that breaks your customers' client code, as a best practice, increment the major version number of your API.
Endpoints can run more than one major version of an API concurrently. By providing both versions of the API, your customers can pick which
version they want to use and control when they migrate to the new version.
upvoted 1 times
Selected Answer: C
https://cloud.google.com/architecture/migrating-a-monolithic-app-to-microservices-gke#versioning
Answer C
upvoted 2 times
https://cloud.google.com/endpoints/docs/openapi/versioning-an-api#backwards-incompatible
upvoted 1 times
Selected Answer: C
C is correct
upvoted 3 times
You are developing an application that will allow users to read and post comments on news articles. You want to configure your application to
store and display user-submitted comments using Firestore. How should you design the schema to support an unknown number of comments and
articles?
C. Store each comment in a document, and add the comment's key to an array property on the article.
D. Store each comment in a document, and add the comment's key to an array property on the user profile.
Correct Answer: D
Selected: A
Firestore has a “hierarchical structure”: collection contains documents, document can contain (sub)collections
D does not make sense: why would you want to link comments to the user profile instead of the article?
https://stackoverflow.com/questions/48634227/limitation-to-number-of-documents-under-one-collection-in-firebase-firestore
“There is no documented limit to the number of documents that can be stored in a Cloud Firestore collection. The system is designed to scale to
huge data sets.”
upvoted 20 times
"Document size - Cloud Firestore is optimized for small documents and enforces a 1MB size limit on documents. If your array can expand
arbitrarily, it is better to use a subcollection, which has better scaling performance."
Check out also the example in the subcollections documentation, showing a rooms-messages hierarchy example.
https://firebase.google.com/docs/firestore/data-model#subcollections
upvoted 6 times
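To make the subcollection model concrete, a sketch of the hierarchy (the field names are hypothetical):

articles (collection)
  {articleId} (document: title, body, publishedAt)
    comments (subcollection)
      {commentId} (document: author, text, createdAt)

Each comment stays a small standalone document, so an article can accumulate an unbounded number of comments without approaching the 1 MB per-document limit.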
Why D and not C? To my understanding, we need to keep a relation between the articles and their comments. I don't see how the user profile could come in handy... but please let me know if I misunderstood something. For me, answer C makes more sense.
upvoted 9 times
Selected Answer: A
Answer is A
upvoted 2 times
As previous comments also pointed out, there is no such size limitation for a subcollection, and it does not make sense to store the relation in the user profile as in answer D.
upvoted 2 times
Selected Answer: A
Option A is the recommended approach for structuring data in Firestore to support an unknown number of comments and articles. Firestore is a
NoSQL document-oriented database, and using subcollections provides a flexible and scalable way to organize related data
upvoted 1 times
I would go with C.
upvoted 1 times
Selected Answer: A
It is recommended to add the comment document IDs to an array property on the corresponding article document, rather than on a user profile.
This approach allows you to easily retrieve all comments for a specific article by querying the comments collection using the article ID and then
filtering the results based on the IDs in the article's comments array.
Storing the comment IDs in the article document also avoids the need to make multiple read operations to retrieve the comments for a given
article, which can be slow and increase latency.
For example, you could create an array property named "comments" in the article document and add the comment document IDs to this array
every time a user submits a new comment for the article. This allows you to efficiently retrieve all comments for a given article by querying the
comments collection and filtering based on the IDs in the article's "comments" array.
upvoted 1 times
However, this approach would make it more difficult to scale the data as the number of comments grows, because it would require you to
retrieve all the comment keys in the array of the article and then perform additional queries to retrieve the actual comment information one
by one. This could slow down the application as the number of comments increase, and make it more difficult to handle high-load situations.
upvoted 1 times
D is the answer.
upvoted 1 times
Selected Answer: D
D is correct I think
upvoted 2 times
The unknown number of articles and comments can exceed the 1 MB limit per document... I vote A.
upvoted 3 times
You recently developed an application. You need to call the Cloud Storage API from a Compute
Engine instance that doesn't have a public IP address. What should you do?
Correct Answer: C
Reference:
https://cloud.google.com/compute/docs/ip-addresses
Selected Answer: D
Private Google Access allows your Compute Engine instances to access Google Cloud APIs and services without requiring a public IP address. It
enables outbound connectivity to Google APIs and services using internal IP addresses.
upvoted 1 times
A is not correct because Carrier Peering enables you to access Google applications, such as Google Workspace, by using a service provider to
obtain enterprise-grade network services that connect your infrastructure to Google.
B is not correct because VPC Network Peering enables you to peer VPC networks so that workloads in different VPC networks can communicate in
a private RFC 1918 space. Traffic stays within Google's network and doesn't traverse the public internet.
C is not correct because Shared VPC allows an organization to connect resources from multiple projects to a common VPC network so that they
can communicate with each other securely and efficiently using internal IPs from that network.
D is correct because Private Google Access is an option available for each subnetwork. When it is enabled, instances in the subnetwork can
communicate with public Google API endpoints even if the instances don't have external IP addresses.
upvoted 4 times
Private Google Access is a feature that enables access to Google Cloud APIs and services for instances that don't have a public IP address. With this
feature, you can allow your Compute Engine instances in a VPC network to access Google services over the private IP addresses, without the need
for a NAT gateway or VPN.
This feature is especially useful when you want to access Google APIs and services from an instance that doesn't have internet access or a public IP
address. In this case, you can enable Private Google Access on the VPC network that your Compute Engine instances belong to, and they will be
able to call the Cloud Storage API using the private IP address.
To enable Private Google Access, you can use the gcloud command-line tool, the Cloud Console, or the REST API. The feature also covers other Google services, such as BigQuery and Cloud SQL, so they too can be reached from instances without a public IP address.
upvoted 1 times
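Enabling it on a subnet is a single command (the subnet name and region are hypothetical):

gcloud compute networks subnets update app-subnet \
  --region=us-central1 \
  --enable-private-ip-google-access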
Selected Answer: D
D is the answer.
https://cloud.google.com/vpc/docs/private-google-access
VM instances that only have internal IP addresses (no external IP addresses) can use Private Google Access. They can reach the external IP
addresses of Google APIs and services. The source IP address of the packet can be the primary internal IP address of the network interface or an
address in an alias IP range that is assigned to the interface. If you disable Private Google Access, the VM instances can no longer reach Google
APIs and services; they can only send traffic within the VPC network.
upvoted 1 times
D is correct
upvoted 2 times
Yup, it's D.
upvoted 1 times
You are a developer working with the CI/CD team to troubleshoot a new feature that your team introduced. The CI/CD team used HashiCorp
Packer to create a new Compute Engine image from your development branch. The image was successfully built, but is not booting up. You need
A. Create a new feature branch, and ask the build team to rebuild the image.
B. Shut down the deployed virtual machine, export the disk, and then mount the disk locally to access the boot logs.
C. Install Packer locally, build the Compute Engine image locally, and then run it in your personal Google Cloud project.
D. Check Compute Engine OS logs using the serial port, and check the Cloud Logging logs to confirm access to the serial port.
Correct Answer: C
Reference:
https://cloud.google.com/architecture/automated-build-images-with-jenkins-kubernetes
I vote D
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-using-serial-console
upvoted 9 times
Selected Answer: D
Selected Answer: D
Answer is D
If the Compute Engine image is not booting up, one of the first steps to troubleshoot the issue would be to check the OS logs to see what might
be causing the problem. Compute Engine provides access to the serial console logs of a virtual machine, which can be accessed through the Cloud
Console or the gcloud command-line tool. This will allow you to see the output of the virtual machine's boot process and identify any errors or
issues that might be preventing it from starting up.
Additionally, you should also check the Cloud Logging logs to confirm that you have access to the serial port. It may be possible that the firewall
rules or IAM permissions are blocking access to the serial port and causing the image not to boot. So, you should check the logs for any errors
related to access or firewall rules.
By checking the OS logs and the Cloud Logging logs, you and the CI/CD team can get a better understanding of what might be causing the issue
and take steps to fix it.
upvoted 2 times
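For reference, the serial console boot logs can be read with a single command (the instance name and zone are hypothetical):

gcloud compute instances get-serial-port-output packer-test-vm --zone=us-central1-a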
D is the answer.
https://cloud.google.com/compute/docs/troubleshooting/vm-startup#identify_the_reason_why_the_boot_disk_isnt_booting
Selected Answer: D
D is correct
upvoted 3 times
Selected Answer: D
D is more suitable
upvoted 3 times
You manage an application that runs in a Compute Engine instance. You also have multiple backend services executing in stand-alone Docker
containers running in Compute Engine instances. The Compute Engine instances supporting the backend services are scaled by managed
instance groups in multiple regions. You want your calling application to be loosely coupled. You need to be able to invoke distinct service
implementations that are chosen based on the value of an HTTP header found in the request. Which Google Cloud feature should you use to accomplish this?
A. Traffic Director
B. Service Directory
Correct Answer: D
"the backend services are scaled by managed instance groups in multiple regions", the Internal load balancer is a regional service, so It's A in my
opinion.
- "An internal HTTP(S) load balancer routes internal traffic to the service running on the VM. Traffic Director works with Cloud Load Balancing to
provide a managed ingress experience. You set up an external or internal load balancer, and then configure that load balancer to send traffic to
your microservices."
upvoted 9 times
Selected Answer: C
Anthos Service Mesh is a service mesh solution that can be used to invoke distinct service implementations based on the value of an HTTP header
in the request. It provides a platform-agnostic way to connect, manage, and secure microservices running on Google Cloud or other environments
upvoted 1 times
Traffic Director is a Google Cloud feature that provides a global traffic management control plane for service mesh architectures. It allows you to
configure and manage traffic routing across multiple services and environments.
upvoted 1 times
Selected Answer: A
A, as it's the only one that lets you route based on headers.
upvoted 1 times
Selected Answer: A
https://cloud.google.com/traffic-director/docs/overview#traffic_management
Advanced traffic management, including routing and request manipulation (based on hostname, path, headers, cookies, and more), enables you to
determine how traffic flows between your services. You can also apply actions like retries, redirects, and weight-based traffic splitting for canary
deployments. Advanced patterns like fault injection, traffic mirroring, and outlier detection enable DevOps use cases that improve your resiliency.
upvoted 1 times
Selected Answer: A
A: https://cloud.google.com/traffic-director/docs/set-up-gce-vms
upvoted 1 times
Traffic Director provides global traffic management for service meshes and hybrid deployments. It allows you to configure routing rules based on
the values of HTTP headers, so you can direct traffic to different service implementations based on the value of an HTTP header found in the
request. With Traffic Director, you can route traffic to different services running in different regions, and it also supports automatic failover, so you
can ensure high availability for your backend services.
In your case, you can configure Traffic Director to inspect the value of an HTTP header in the request, and then route the traffic to the appropriate
service implementation running in different regions. This allows your application to invoke the backend services in a loosely coupled way, and
ensures that the backend services can scale independently of the calling application.
upvoted 1 times
However, in this scenario you want to invoke distinct service implementations that are chosen based on the value of an HTTP header found in the request. The internal HTTP(S) load balancer doesn't offer this type of feature: it typically directs traffic based on the IP or hostname and port, and it's not capable of inspecting the value of an HTTP header the way Traffic Director does.
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/load-balancing/docs/l7-internal/traffic-management
Internal HTTP(S) Load Balancing supports advanced traffic management functionality that enables you to use the following features:
- Traffic steering. Intelligently route traffic based on HTTP(S) parameters (for example, host, path, headers, and other request parameters).
upvoted 1 times
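Both candidate answers hinge on the same mechanism: URL map route rules that match on HTTP headers. As a hedged illustration of that concept (not the exam's required setup), here is a minimal sketch using the google-cloud-compute Python client; the project, backend service names, and header name are hypothetical placeholders.

```python
# Sketch: a URL map whose route rule sends requests carrying a specific
# HTTP header value to a distinct backend service. All names (project,
# backends, header) are hypothetical placeholders.
from google.cloud import compute_v1

project = "my-project"
svc_v1 = f"projects/{project}/global/backendServices/svc-v1"
svc_v2 = f"projects/{project}/global/backendServices/svc-v2"

url_map = compute_v1.UrlMap(
    name="header-routing",
    default_service=svc_v1,
    host_rules=[compute_v1.HostRule(hosts=["*"], path_matcher="pm")],
    path_matchers=[compute_v1.PathMatcher(
        name="pm",
        default_service=svc_v1,
        route_rules=[compute_v1.HttpRouteRule(
            priority=1,
            service=svc_v2,  # chosen when the header matches
            match_rules=[compute_v1.HttpRouteRuleMatch(
                prefix_match="/",
                header_matches=[compute_v1.HttpHeaderMatch(
                    header_name="x-service-version",  # hypothetical header
                    exact_match="v2",
                )],
            )],
        )],
    )],
)

# For an internal HTTP(S) load balancer the URL map (and its backend
# services) would be regional, e.g.:
# compute_v1.RegionUrlMapsClient().insert(
#     project=project, region="us-central1", url_map_resource=url_map)
```

Traffic Director consumes an equivalent routing configuration, so the header-match concept carries over whichever product you pick.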
Selected Answer: A
I think A is correct
upvoted 2 times
Selected Answer: A
I believe, it should be A.
upvoted 3 times
Also, in the above question there's a statement specifying the need for the application to be loosely coupled. To support this point, I found one line in the Traffic Director documentation: 'This separation of application logic from networking logic lets you improve your development velocity, increase service availability, and introduce modern DevOps practices to your organization.'
upvoted 1 times
Your team is developing an ecommerce platform for your company. Users will log in to the website and add items to their shopping cart. Users will
be automatically logged out after 30 minutes of inactivity. When users log back in, their shopping cart should be saved. How should you store
users' session and shopping cart information while following Google-recommended best practices?
A. Store the session information in Pub/Sub, and store the shopping cart information in Cloud SQL.
B. Store the shopping cart information in a file on Cloud Storage where the filename is the SESSION ID.
C. Store the session and shopping cart information in a MySQL database running on multiple Compute Engine instances.
D. Store the session information in Memorystore for Redis or Memorystore for Memcached, and store the shopping cart information in
Firestore.
Correct Answer: A
Selected Answer: D
Should be D definitely
upvoted 9 times
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
A is not correct because local memory is lost on process termination, so you would lose the cart information.
B is not correct because accessing a Cloud Storage bucket is slow and expensive for session information. This is not a Google Cloud best practice.
C is not correct because BigQuery wouldn't be able to handle the frequent updates made to carts and sessions.
D is correct because Memorystore is fast and a standard solution to store session information, and Firestore is ideal for small structured data such
as a shopping cart. The user will be mapped to the shopping cart with a new session, if required.
upvoted 1 times
Selected Answer: D
Answer is D
When storing session and shopping cart information for an ecommerce platform, it's important to consider scalability, reliability, and security. One
solution that follows Google-recommended best practices would be to use Memorystore for Redis or Memorystore for Memcached to store
session information and Firestore to store shopping cart information.
Memorystore can store session information and easily handle a large number of concurrent connections, which is crucial for an ecommerce
platform where users are logged in and adding items to their shopping cart frequently.
Firestore can easily handle large amounts of semi-structured data, such as a shopping cart's items. Firestore is also a scalable and reliable solution, and it supports automatic scaling and replication.
By separating the session information and shopping cart information into different services, you can also increase security and avoid potential data breaches. Using different services also allows you to scale them independently.
upvoted 1 times
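To make the recommended split concrete, here is a minimal sketch, assuming a Memorystore Redis host reachable from the app; the environment variable, key, and field names are hypothetical placeholders.

```python
# Sketch: session data in Redis (fast, expiring), cart in Firestore
# (durable, survives logout). Names are hypothetical placeholders.
import os
import redis
from google.cloud import firestore

r = redis.Redis(host=os.environ["REDIS_HOST"], port=6379)
db = firestore.Client()

def on_login(user_id: str, session_id: str) -> None:
    # Session key expires after 30 minutes, mirroring the auto-logout.
    r.setex(f"session:{session_id}", 1800, user_id)

def add_to_cart(user_id: str, item_id: str, qty: int) -> None:
    # Cart lives in Firestore, so it persists across sessions.
    db.collection("carts").document(user_id).set(
        {"items": {item_id: qty}}, merge=True
    )

def load_cart(user_id: str) -> dict:
    snap = db.collection("carts").document(user_id).get()
    return snap.to_dict() or {}
```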
Selected Answer: D
D is the answer.
upvoted 1 times
Selected Answer: D
D is correct
upvoted 3 times
Selected Answer: D
Agree with D
upvoted 1 times
Selected Answer: D
Vote D
upvoted 1 times
vote D
upvoted 4 times
You are designing a resource-sharing policy for applications used by different teams in a Google Kubernetes Engine cluster. You need to ensure
that all applications can access the resources needed to run. What should you do? (Choose two.)
A. Specify the resource limits and requests in the object specifications.
B. Create a namespace for each team, and attach resource quotas to each namespace.
C. Create a LimitRange to specify the default compute resource requirements for each namespace.
D. Create a Kubernetes service account (KSA) for each application, and assign each KSA to the namespace.
E. Use the Anthos Policy Controller to enforce label annotations on all namespaces. Use taints and tolerations to allow resource sharing for
namespaces.
Correct Answer: AB
I vote B, C
https://kubernetes.io/docs/concepts/policy/resource-quotas/
https://kubernetes.io/docs/concepts/policy/limit-range/
upvoted 13 times
Selected Answer: AC
When it comes to Google's recommended best practices for Kubernetes, especially in the context of Google Kubernetes Engine (GKE), the
emphasis is generally placed on setting specific resource requests and limits for each pod and container (Option A). This approach aligns with
Kubernetes best practices, as it ensures efficient and reliable operation of applications by maximizing infrastructure utilization and guaranteeing
smooth application performance.
This granular level of configuration, where resource requests and limits are explicitly set for each workload, is key to operating applications as
efficiently and reliably as possible in Kubernetes clusters. It allows for the classification of pods into different Quality of Service (QoS) classes, such
as 'Guaranteed' and 'Burstable', which further aids in resource management and scheduling decisions.
upvoted 1 times
Specify the resource limits and requests in the object specifications. This will ensure that each application is allocated the resources it needs to run,
and that no application can consume more resources than it was allocated.
Create a namespace for each team, and attach resource quotas to each namespace. This will allow you to isolate each team's applications from
each other, and to ensure that each team's applications are not consuming more resources than they were allocated.
upvoted 1 times
Selected Answer: BC
In the context of the problem statement, B and C are appropriate solution for ensuring that all applications can access the resources needed to run:
B. Create a namespace for each team, and attach resource quotas to each namespace. This way, you can set limits on the resources that a team can
consume, so that one team does not consume all the resources of the cluster, and that resources are shared among all teams in a fair way.
C. Create a LimitRange to specify the default compute resource requirements for each namespace. LimitRanges allow you to set default limits and
requests for all the pods in a specific namespace; it also ensures that pods in that namespace can never consume more resources than the LimitRange defines.
You can use a combination of resource limits, quotas, and limit ranges to prevent a single team or application from consuming too many resources,
as well as to ensure that all teams and applications have access to the resources they need to run.
upvoted 1 times
Option A, specifying the resource limits and requests in the object specifications, is a valid method for controlling the resources that a pod or container needs, but it may not be sufficient by itself to fully manage the resources in a multi-tenant cluster where multiple teams and applications need to share resources.
When you set resource limits and requests at the pod or container level, you have fine-grained control over the resources that a specific pod or container needs, but this doesn't provide a way to set limits or quotas at the level of a whole team or namespace. It also doesn't provide a default configuration for all pods created in a namespace.
By itself, this method does not give you the visibility and control you need over the overall resource usage across multiple teams and applications. By creating a namespace per team and attaching quotas, you can limit the resources each team can use, and with a LimitRange you can ensure that no pod created in the namespace can go beyond specific limits.
upvoted 1 times
Selected Answer: BC
BC is the answer.
https://kubernetes.io/docs/concepts/policy/resource-quotas/
A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit
the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by
resources in that namespace.
https://kubernetes.io/docs/concepts/policy/limit-range/
A LimitRange is a policy to constrain the resource allocations (limits and requests) that you can specify for each applicable object kind (such as Pod
or PersistentVolumeClaim) in a namespace.
upvoted 2 times
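As a rough illustration of B and C together, the sketch below creates a per-team ResourceQuota and LimitRange using the official Kubernetes Python client; the namespace name and resource values are hypothetical, and the namespace is assumed to already exist.

```python
# Sketch: per-team namespace quota plus default container limits.
# Values and names are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config()
core = client.CoreV1Api()
ns = "team-a"  # hypothetical, assumed to exist

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        # Caps aggregate consumption for everything in the namespace.
        hard={"requests.cpu": "8", "requests.memory": "16Gi",
              "limits.cpu": "16", "limits.memory": "32Gi"},
    ),
)
core.create_namespaced_resource_quota(namespace=ns, body=quota)

limits = client.V1LimitRange(
    metadata=client.V1ObjectMeta(name="team-a-defaults"),
    spec=client.V1LimitRangeSpec(limits=[
        client.V1LimitRangeItem(
            type="Container",
            default={"cpu": "500m", "memory": "256Mi"},         # default limits
            default_request={"cpu": "250m", "memory": "128Mi"},  # default requests
        )
    ]),
)
core.create_namespaced_limit_range(namespace=ns, body=limits)
```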
Selected Answer: BC
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits
Ans B,C
upvoted 3 times
BC are correct
upvoted 4 times
A & B, obviously!
upvoted 3 times
Selected Answer: BC
Selected Answer: BC
You are developing a new application that has the following design requirements:
✑ Creation and changes to the application infrastructure are versioned and auditable.
✑ The application and deployment infrastructure uses Google-managed services as much as possible.
✑ The application runs on a serverless compute platform.
How should you design the application's architecture?
A. 1. Store the application and infrastructure source code in a Git repository. 2. Use Cloud Build to deploy the application infrastructure with Terraform. 3. Deploy the application source code to Cloud Functions as a pipeline step.
B. 1. Deploy Jenkins from the Google Cloud Marketplace, and define a continuous integration pipeline in Jenkins. 2. Configure a pipeline step
to pull the application source code from a Git repository. 3. Deploy the application source code to App Engine as a pipeline step.
C. 1. Create a continuous integration pipeline on Cloud Build, and configure the pipeline to deploy the application infrastructure using
Deployment Manager templates. 2. Configure a pipeline step to create a container with the latest application source code. 3. Deploy the container in a Compute Engine instance as a pipeline step.
D. 1. Deploy the application infrastructure using gcloud commands. 2. Use Cloud Build to define a continuous integration pipeline for changes
to the application source code. 3. Configure a pipeline step to pull the application source code from a Git repository, and create a
containerized application. 4. Deploy the new container on Cloud Run as a pipeline step.
Correct Answer: D
Reference:
https://cloud.google.com/docs/ci-cd
Selected Answer: A
B - use Jenkins as the deployment tool instead of Cloud Build (The application and deployment infrastructure uses Google-managed services as
much as possible).
C - uses Compute Engine to run containers. CE is not serverless.
D - we can't version gcloud commands
upvoted 6 times
D is a far better fit here and nobody is talking about "versioning gcloud commands" - Cloud Run has revisions (=versions), which meets the
task's criteria.
upvoted 1 times
Selected Answer: D
I vote D:
- gcloud, Cloud Build, Cloud Run - are Google-managed services
- Cloud Run has revisions (=versions)
- Cloud Run is serverless
A is wrong, as Cloud Functions are intended for single-purpose functions - not an entire app.
upvoted 1 times
A is correct
upvoted 1 times
My answer is A:
Version and auditable: GIT
GCP managed deployment infrastructure: Cloud build, Cloud Deployment Manager, Terraform
Serverless: Cloud Functions
upvoted 1 times
Selected Answer: A
What put me off A is that at the end it deploys to Cloud Functions; the whole application should be serverless, not just a single function, and that is what Cloud Run should do.
upvoted 2 times
Selected Answer: D
Option D is the best fit for designing the architecture of the new application, as it satisfies all the design requirements: versioning and auditing of infrastructure changes, using Google-managed services, and deploying the application on a serverless compute platform. Compared with option A:
- By using Terraform, which is a third-party infrastructure as code tool, it is not a Google-managed service and it may not have the same level of
integration as Google-managed services.
- Cloud Functions is a serverless compute platform, but it's mainly used to run event-driven, short-lived functions; it's not a suitable choice for running long-running processes, web servers, and so on.
upvoted 2 times
Selected Answer: A
A is the answer.
upvoted 1 times
Selected Answer: A
A is correct
upvoted 4 times
So between options A and D: option A is not right because deploying to Cloud Functions is not suitable as the serverless compute platform here.
So I think the answer is D.
upvoted 1 times
Selected Answer: A
Vote A
upvoted 2 times
You are creating and running containers across different projects in Google Cloud. The application you are developing needs to access Google
Cloud services from within Google Kubernetes Engine (GKE). What should you do?
A. Assign a Google service account to the GKE nodes.
B. Use a Google service account to run the Pod with Workload Identity.
D. Use a Google service account with GKE role-based access control (RBAC).
Correct Answer: A
Option B
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
upvoted 7 times
Selected Answer: B
The best way to access Google Cloud services from within Google Kubernetes Engine (GKE) is to use a Google service account to run the Pod with
Workload Identity.
Workload Identity allows your pods to authenticate to Google Cloud services using their Kubernetes service account credentials, without you
having to expose any sensitive credentials in your code.
upvoted 1 times
Selected Answer: B
An application image runs as a container within a Pod as a process, so the Pod should be identified as the principal here, and it should have a service account to access other services within the GKE cluster.
Selected Answer: B
In summary, using Workload Identity allows you to authenticate your application to Google Cloud services using the same identity that runs the
application, this makes it simple to manage the access and permissions to resources, and also ensures that your application only has the necessary
permissions to access the services.
upvoted 2 times
Workload Identity allows you to authenticate to Google Cloud services using the same identity that runs your application, instead of creating and
managing a separate service account. This simplifies the process of granting permissions to your application, and ensures that it only has the
necessary access to resources.
When you assign a Google service account to GKE nodes (Option A), it can be difficult to manage the permissions needed by the application and
also could be a security issue since it grants access to all the services that the service account has permissions to.
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity#what_is
Applications running on GKE might need access to Google Cloud APIs such as Compute Engine API, BigQuery Storage API, or Machine Learning
APIs.
Workload Identity allows a Kubernetes service account in your GKE cluster to act as an IAM service account. Pods that use the configured
Kubernetes service account automatically authenticate as the IAM service account when accessing Google Cloud APIs. Using Workload Identity
allows you to assign distinct, fine-grained identities and authorization for each application in your cluster.
upvoted 1 times
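A minimal sketch of the Workload Identity wiring, using the Kubernetes Python client; the namespace, KSA, and GSA names are hypothetical placeholders.

```python
# Sketch: bind a Kubernetes service account (KSA) to a Google service
# account (GSA) for Workload Identity. All names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

gsa = "app-sa@my-project.iam.gserviceaccount.com"  # hypothetical GSA
ksa = client.V1ServiceAccount(
    metadata=client.V1ObjectMeta(
        name="app-ksa",
        namespace="default",
        # This annotation is what links the KSA to the GSA.
        annotations={"iam.gke.io/gcp-service-account": gsa},
    )
)
core.create_namespaced_service_account(namespace="default", body=ksa)

# The GSA side still needs a one-time IAM binding (e.g. via gcloud):
# grant roles/iam.workloadIdentityUser to the member
# serviceAccount:my-project.svc.id.goog[default/app-ksa].
# Pods that set serviceAccountName: app-ksa then authenticate as the GSA.
```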
Selected Answer: B
B is correct
upvoted 2 times
Vote B
upvoted 1 times
You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google
Kubernetes Engine
(GKE) and do not want the application serving traffic until after the configuration has been retrieved. What should you do?
A. Use the gsutil utility to copy files from within the Docker container at startup, and start the service using an ENTRYPOINT script.
B. Create a PersistentVolumeClaim on the GKE cluster. Access the configuration files from the volume, and start the service using an
ENTRYPOINT script.
C. Use the COPY statement in the Dockerfile to load the configuration into the container image. Verify that the configuration is available, and
D. Add a startup script to the GKE instance group to mount the NFS share at node startup. Copy the configuration files into the container, and
Correct Answer: D
Reference:
https://cloud.google.com/compute/docs/instances/startup-scripts/linux
It's not necessary to mount the NFS share on each node in GKE. Just create a PVC pointing to the shared NFS, mount it in the container, and use the configuration in the ENTRYPOINT. Vote B
upvoted 8 times
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
B is a more formal and standardized way to mount NFS onto the worker node, compared with the option that asks us to create a startup script to mount the volume.
upvoted 1 times
With a PersistentVolumeClaim object, we can claim the volume we need dynamically; the storage class will be defined by the network administrator.
The container/Pod needs to wait until it reads the configuration from the mounted volume before serving traffic to its clients.
Selected Answer: B
Selected Answer: B
Option B: allows the application to be stateless and have no dependencies on the filesystem of the host.
D: is a good solution since it allows the application to access its configuration as soon as the application starts, without having to copy the
configuration files into the container.
But the best option is B, because it allows the application to be stateless and have no dependencies on the filesystem of the host. This approach is
more flexible, makes it easy to update the configuration files, and reduces the size of the container image.
upvoted 1 times
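A minimal sketch of answer B, assuming an NFS-capable StorageClass is available in the cluster; the claim name, class, and paths are hypothetical placeholders.

```python
# Sketch: a PVC backed by an NFS-capable StorageClass; the Pod mounts it
# and the ENTRYPOINT blocks until the configuration is readable.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="legacy-config"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],   # NFS allows shared mounts
        storage_class_name="nfs-client",  # hypothetical class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "1Gi"}
        ),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# The container then mounts the claim (e.g. at /config) and its
# ENTRYPOINT script waits for the configuration before serving traffic:
#   while [ ! -f /config/app.conf ]; do sleep 1; done; exec ./server
```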
Selected Answer: B
https://cloud.google.com/filestore/docs/accessing-fileshares
https://cloud.google.com/storage/docs/gcs-fuse
upvoted 1 times
Selected Answer: B
B is the answer.
upvoted 1 times
B is correct
upvoted 3 times
Your team is developing a new application using a PostgreSQL database and Cloud Run. You are responsible for ensuring that all traffic is kept
private on Google
Cloud. You want to use managed services and follow Google-recommended best practices. What should you do?
A. 1. Enable Cloud SQL and Cloud Run in the same project. 2. Configure a private IP address for Cloud SQL. Enable private services access. 3.
Create a Serverless VPC Access connector. 4. Configure Cloud Run to use the connector to connect to Cloud SQL.
B. 1. Install PostgreSQL on a Compute Engine virtual machine (VM), and enable Cloud Run in the same project. 2. Configure a private IP
address for the VM. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Configure Cloud Run to use the connector to access the VM hosting PostgreSQL.
C. 1. Use Cloud SQL and Cloud Run in different projects. 2. Configure a private IP address for Cloud SQL. Enable private services access. 3.
Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to connect to Cloud SQL.
D. 1. Install PostgreSQL on a Compute Engine VM, and enable Cloud Run in different projects. 2. Configure a private IP address for the VM.
Enable private services access. 3. Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two projects.
Configure Cloud Run to use the connector to access the VM hosting PostgreSQL
Correct Answer: B
Selected Answer: A
I would go with A.
upvoted 1 times
Selected Answer: A
The key here is Google-managed services and following Google-recommended best practices: definitely Cloud SQL instead of self-managed PostgreSQL, which is effectively an unmanaged service relying on custom configuration by the customer.
upvoted 1 times
The answer would be A. By using Cloud SQL and Cloud Run in the same project, you can take advantage of the built-in security features and
managed services provided by Google Cloud. By configuring a private IP address for Cloud SQL and enabling private services access, you can
ensure that all traffic is kept private. You can also create a Serverless VPC Access connector and configure Cloud Run to use this connector to
connect to Cloud SQL. This configuration will allow your application to connect to the database securely and privately, following Google-
recommended best practices.
upvoted 1 times
A is the answer.
https://cloud.google.com/vpc/docs/serverless-vpc-access
Serverless VPC Access makes it possible for you to connect directly to your Virtual Private Cloud network from serverless environments such as
Cloud Run, App Engine, or Cloud Functions. Configuring Serverless VPC Access allows your serverless environment to send requests to your VPC
network using internal DNS and internal IP addresses (as defined by RFC 1918 and RFC 6598). The responses to these requests also use your
internal network.
upvoted 1 times
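Once the connector is attached to the Cloud Run service, the application code itself is ordinary: it connects to the Cloud SQL instance's private IP like any in-VPC client. A hedged sketch, with hypothetical environment variable names:

```python
# Sketch: connect from Cloud Run to Cloud SQL (PostgreSQL) over its
# private IP via the Serverless VPC Access connector. Env var names
# are hypothetical placeholders.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ["DB_PRIVATE_IP"],  # e.g. a 10.x.x.x address, never public
    port=5432,
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASS"],
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT 1")  # sanity check: traffic stays on the VPC
    print(cur.fetchone())
```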
Selected Answer: A
https://cloud.google.com/sql/docs/postgres/connect-run#configure
https://cloud.google.com/sql/docs/postgres/connect-run#private-ip
Answer A
upvoted 1 times
Selected Answer: A
A is correct
upvoted 2 times
Selected Answer: A
Options C and D are crossed out as they suggest using different projects.
To choose between options A and B: why should we install PostgreSQL explicitly if it is already available in Cloud SQL? So, I will go with Cloud SQL, i.e., option A.
upvoted 1 times
Selected Answer: A
Vote A
upvoted 2 times
Selected Answer: A
Correct option is A
upvoted 2 times
You are developing an application that will allow clients to download a file from your website for a specific period of time. How should you design
the application to complete this task while following Google-recommended best practices?
A. Configure the application to send the file to the client as an email attachment.
B. Generate and assign a Cloud Storage-signed URL for the file. Make the URL available for the client to download.
C. Create a temporary Cloud Storage bucket with time expiration specified, and give download permissions to the bucket. Copy the file, and
D. Generate the HTTP cookies with time expiration specified. If the time is valid, copy the file from the Cloud Storage bucket, and make the file
Correct Answer: B
Yes, I vote B
upvoted 5 times
Selected Answer: B
B is correct.
upvoted 1 times
B. Generate and assign a Cloud Storage-signed URL for the file. Make the URL available for the client to download.
The best approach is to use a Cloud Storage signed URL, which allows you to give time-limited read access to a specific file in your bucket. Once
the URL is generated, it can be shared with the client to download the file. This approach provides an easy way to control access to your files, and
allows you to revoke access at any time by simply invalidating the URL. It also ensures that the file is stored and served securely via Cloud Storage
and is durable, highly available and performant way to serve files.
upvoted 2 times
Selected Answer: B
B is the answer.
https://cloud.google.com/storage/docs/access-control/signed-urls
A signed URL is a URL that provides limited permission and time to make a request. Signed URLs contain authentication information in their query
string, allowing users without credentials to perform specific actions on a resource. When you generate a signed URL, you specify a user or service
account which must have sufficient permission to make the request that the signed URL will make. After you generate a signed URL, anyone who
possesses it can use the signed URL to perform specified actions, such as reading an object, within a specified period of time.
upvoted 1 times
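A minimal sketch of generating a V4 signed URL with the google-cloud-storage library; the bucket and object names are hypothetical placeholders.

```python
# Sketch: generate a V4 signed URL valid for 15 minutes.
# Bucket/object names are hypothetical placeholders.
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
blob = client.bucket("downloads-bucket").blob("reports/file.pdf")

url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),  # the link stops working after this
    method="GET",
)
print(url)  # hand this URL to the client; no Google credentials needed
```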
B is correct
upvoted 2 times
Your development team has been asked to refactor an existing monolithic application into a set of composable microservices. Which design
aspects should you implement for the new application? (Choose two.)
A. Develop the microservice code in the same programming language used by the microservice caller.
B. Create an API contract agreement between the microservice implementation and microservice caller.
C. Require asynchronous communications between all microservice implementations and microservice callers.
D. Ensure that sufficient instances of the microservice are running to accommodate the performance requirements.
E. Implement a versioning scheme to permit future changes that could be incompatible with the current interface.
Correct Answer: B
Selected Answer: B
Vote BE
upvoted 7 times
Selected Answer: B
Selected Answer: E
Selected Answer: B
API contract design is the first step in the design-first approach to creating (REST) APIs. We can also follow a code-first approach when delivering a solution via public APIs. The design-first approach is more flexible and gives the dev team sufficient time to gather requirements and understand what the customer really wants.
upvoted 1 times
B. Guarantees that the two parties are communicating in a well-defined way, which makes the microservices more flexible, composable, and easy to
understand.
E. Allows you to make changes to the service's API while still maintaining backward compatibility. With versioning, new and old consumers can continue to use the service without interruption as new features are added.
On the other hand, developing the microservice code in the same programming language as the microservice caller does not promote loose
coupling, and it may also increase the complexity of the system as it will depend on language-specific features. Asynchronous communications are
also not always necessary and depend on the use case and requirement. Ensuring sufficient instances of the microservice are running can be done
by using a scalability strategy such as Auto-scaling, and this is not a specific design aspect.
upvoted 1 times
Selected Answer: B
BE is the answer.
https://cloud.google.com/architecture/migrating-a-monolithic-app-to-microservices-gke#api_contracts
Each microservice should be invoked only from a set of interfaces. Each interface should in turn be clearly defined by a contract that can be
implemented using an API definition language like the OpenAPI Initiative specification or RAML. Having well-defined API contracts and interfaces
allows you to develop tests as a main component of your solution (for example, by applying test-driven development) against these API interfaces.
https://cloud.google.com/architecture/migrating-a-monolithic-app-to-microservices-gke#versioning
To give you flexibility in managing updates that might break existing clients, you should implement a versioning scheme for your microservices.
Versioning lets you deploy updated versions of a microservice without affecting the clients that are using an existing version.
upvoted 1 times
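To illustrate the versioning point (E), here is a minimal sketch of URL-path versioning; Flask and the endpoint shapes are purely illustrative assumptions, not part of the question.

```python
# Sketch: incompatible changes ship under /v2 while /v1 keeps serving
# existing callers. Endpoints and payloads are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/orders/<order_id>")
def get_order_v1(order_id):
    # Original contract: flat status string.
    return jsonify({"id": order_id, "status": "SHIPPED"})

@app.route("/v2/orders/<order_id>")
def get_order_v2(order_id):
    # Breaking change isolated in v2: status becomes a nested object.
    return jsonify({"id": order_id,
                    "status": {"code": "SHIPPED", "updated": "2023-12-07"}})

if __name__ == "__main__":
    app.run(port=8080)
```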
2. https://cloud.google.com/architecture/microservices-architecture-refactoring-monoliths#design_interservice_communication Ans C
Ans B,C
upvoted 1 times
BE are correct
upvoted 4 times
Selected Answer: B
upvoted 4 times
Selected Answer: E
You deployed a new application to Google Kubernetes Engine and are experiencing some performance degradation. Your logs are being written to
Cloud
Logging, and you are using a Prometheus sidecar model for capturing metrics. You need to correlate the metrics and data from the logs to
troubleshoot the performance issue and send real-time alerts while minimizing costs. What should you do?
A. Create custom metrics from the Cloud Logging logs, and use Prometheus to import the results using the Cloud Monitoring REST API.
B. Export the Cloud Logging logs and the Prometheus metrics to Cloud Bigtable. Run a query to join the results, and analyze in Google Data
Studio.
C. Export the Cloud Logging logs and stream the Prometheus metrics to BigQuery. Run a recurring query to join the results, and send
D. Export the Prometheus metrics and use Cloud Monitoring to view them as external metrics. Configure Cloud Monitoring to create log-based
metrics from the logs, and correlate them with the Prometheus data.
Correct Answer: D
Reference:
https://cloud.google.com/blog/products/operations/troubleshoot-gke-faster-with-monitoring-data-in-your-logs
Selected Answer: D
This option is the most cost-effective because it does not require you to export any data to Bigtable or BigQuery. It is also the most efficient option
because it allows you to correlate the metrics and logs in real time.
upvoted 1 times
Correlate Prometheus metrics and Cloud Logging logs: we need to compare these two sources. Prometheus is an external metrics system, which can be a library dependency used in the application. To compare apples to apples, we need to bring the Prometheus metrics into GCP and configure Cloud Monitoring to treat them as external metrics.
upvoted 1 times
This option allows you to use Cloud Monitoring to view the Prometheus metrics and create log-based metrics from the logs. This allows you to
correlate the metrics and logs in one place. By using Cloud Monitoring, you can also set up alerting rules and dashboards which can help you to
identify and troubleshoot the performance issues in real-time and with low costs.
It's not necessary to export the data to another storage to perform the correlation and to set up notifications, it can all be done directly in the
Cloud Monitoring, taking advantage of its features.
upvoted 1 times
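A hedged sketch of creating such a log-based metric with the google-cloud-logging library; the metric name and filter are hypothetical placeholders.

```python
# Sketch: define a log-based metric that Cloud Monitoring can chart and
# alert on next to the imported Prometheus metrics. Name and filter are
# hypothetical placeholders.
from google.cloud import logging  # google-cloud-logging client library

client = logging.Client()
metric = client.metric(
    "checkout_errors",
    filter_=('resource.type="k8s_container" '
             'AND severity>=ERROR AND jsonPayload.path="/checkout"'),
    description="Error log entries from the checkout handler",
)
if not metric.exists():
    metric.create()  # the metric then appears in Cloud Monitoring
```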
Selected Answer: D
D is the answer.
https://cloud.google.com/stackdriver/docs/solutions/gke/prometheus#viewing_metrics
upvoted 1 times
D is correct
upvoted 2 times
Selected Answer: D
upvoted 1 times
Selected Answer: D
https://cloud.google.com/stackdriver/docs/solutions/gke/prometheus#viewing_metrics
upvoted 1 times
Selected Answer: D
I agree with D as well … looking to minimize costs = use Cloud Monitoring, which has alerting built-in
upvoted 1 times
You have been tasked with planning the migration of your company's application from on-premises to Google Cloud. Your company's monolithic
application is an ecommerce website. The application will be migrated to microservices deployed on Google Cloud in stages. The majority of your
company's revenue is generated through online sales, so it is important to minimize risk during the migration. You need to prioritize features and select the first functionality to migrate. What should you do?
A. Migrate the Product catalog, which has integrations to the frontend and product database.
B. Migrate Payment processing, which has integrations to the frontend, order database, and third-party payment vendor.
C. Migrate Order fulfillment, which has integrations to the order database, inventory system, and third-party shipping vendor.
D. Migrate the Shopping cart, which has integrations to the frontend, cart database, inventory system, and payment processing system.
Correct Answer: A
Selected Answer: A
Selected Answer: A
Selected Answer: A
Selected Answer: D
https://cloud.google.com/architecture/migrating-a-monolithic-app-to-microservices-gke#example_migrating_a_shopping_cart
Based on the guide referenced, the answer would be D: migrate the shopping cart, which has integrations to the frontend, cart database, inventory system, and payment processing system. The guide recommends migrating functionality with the fewest dependencies and the lowest level of complexity first, and the shopping cart functionality has fewer dependencies and less complexity than the other options presented. This will minimize risk while still providing value to the business and allowing further migration of more complex functionality.
upvoted 1 times
Selected Answer: A
A is the answer.
https://cloud.google.com/architecture/migrating-a-monolithic-app-to-microservices-gke#choosing_an_initial_migration_effort
upvoted 1 times
"When you plan your migration, it's tempting to start with features that are trivial to migrate. This might represent a quick win, but might not be
the best learning experience for your team. Instead of going straight to the migration, you should spend time evaluating all of the features and
"According to this evaluation framework, the ideal candidate for the initial migration effort should be challenging enough to be meaningful, but
simple enough to minimize the risk of failure. The initial migration process should also:
Require little refactoring, considering both the feature itself and the related business processes.
Be stateless—that is, have no external data requirements.
Have few or no dependencies."
I think it's between options B & C since the third-party vendors already have a microservices architecture going on.
https://cloud.google.com/architecture/migrating-a-monolithic-app-to-microservices-gke#:~:text=When%20you%20plan,for%20their%20migration.
upvoted 1 times
Selected Answer: A
A is correct
upvoted 2 times
Your team develops services that run on Google Kubernetes Engine. Your team's code is stored in Cloud Source Repositories. You need to quickly
identify bugs in the code before it is deployed to production. You want to invest in automation to improve developer feedback and make the process as efficient as possible. What should you do?
A. Use Spinnaker to automate building container images from code based on Git tags.
B. Use Cloud Build to automate building container images from code based on Git tags.
C. Use Spinnaker to automate deploying container images to production based on forked versions.
D. Use Cloud Build to automate building container images from code based on forked versions.
Correct Answer: A
Reference:
https://spinnaker.io/docs/guides/tutorials/codelabs/kubernetes-v2-source-to-prod/
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
Selected Answer: D
I say D.
Both B and D could work; however, why not use traditional Git workflows and keep separate branches for separate tasks? This way each image can be tested independently.
upvoted 1 times
Selected Answer: B
Option B is appropriate because it uses Cloud Build, a service that can automatically build container images from code stored in Cloud Source
Repositories based on Git tags. This allows developers to quickly identify bugs in their code before it is deployed to production, by automating the
building process and improving developer feedback.
Option A uses Spinnaker, which is a multi-cloud continuous delivery platform that can automate building, testing, and deploying container images.
However, it does not specifically mention using git tags to trigger builds, thus for this particular use case it might not be the best fit.
upvoted 1 times
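A minimal sketch of answer B using the google-cloud-build Python client; the project, repository name, and tag regex are hypothetical placeholders.

```python
# Sketch: create a Cloud Build trigger that fires on Git tags matching
# v* in a Cloud Source Repositories repo. Names are hypothetical.
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()
trigger = cloudbuild_v1.BuildTrigger(
    name="build-on-release-tag",
    trigger_template=cloudbuild_v1.RepoSource(
        project_id="my-project",
        repo_name="my-service",
        tag_name=r"^v.*$",       # regex: any tag starting with "v"
    ),
    filename="cloudbuild.yaml",  # build steps live in the repo itself
)
client.create_build_trigger(project_id="my-project", trigger=trigger)
```

Each tagged release then builds (and can test) a container image automatically, giving developers feedback before anything reaches production.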
Option D uses Cloud Build, but it's not specific to building images based on git tags, it's more general and focuses on building images based on
forked versions, which might not be needed in this case.
upvoted 1 times
Selected Answer: B
B is the answer.
upvoted 1 times
D) You need to quickly identify bugs in the code before it is deployed to production, so it's for developers to fork the code and test the build. Later they can push the changes via a PR to the master repo/branch.
upvoted 2 times
Selected Answer: B
Selected Answer: B
I think B is correct
upvoted 3 times
Selected Answer: D
I vote D, because every developer should test the build in their own branch before merging to master. It will expose bugs. Once the branch is merged to master, the master build pipeline kicks in.
upvoted 1 times
I would disagree with A, as Spinnaker is for deployment, not for building images.
So it's either B or C. C states deploying to production, but the question is about giving the developer feedback before code goes to production.
Perhaps choice A was poorly written; if it said build instead of deploy, it could be A.
upvoted 4 times
Your team is developing an application in Google Cloud that executes with user identities maintained by Cloud Identity. Each of your application's
users will have an associated Pub/Sub topic to which messages are published, and a Pub/Sub subscription where the same user will retrieve
published messages. You need to ensure that only authorized users can publish and subscribe to their own specific Pub/Sub topic and subscription. What should you do?
A. Bind the user identity to the pubsub.publisher and pubsub.subscriber roles at the resource level.
B. Grant the user identity the pubsub.publisher and pubsub.subscriber roles at the project level.
C. Grant the user identity a custom role that contains the pubsub.topics.create and pubsub.subscriptions.create permissions.
D. Configure the application to run as a service account that has the pubsub.publisher and pubsub.subscriber roles.
Correct Answer: C
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
Selected Answer: A
Selected Answer: A
A. Bind the user identity to the pubsub.publisher and pubsub.subscriber roles at the resource level.
By binding the user identity to the pubsub.publisher and pubsub.subscriber roles at the resource level, you can ensure that each user can only
publish and subscribe to their specific Pub/Sub topic and subscription. This allows for granular permissions management and ensures that each
user can only access the resources they are authorized to.
C. Granting the user identity a custom role that contains the pubsub.topics.create and pubsub.subscriptions.create permissions would allow users to create topics and subscriptions, but would not grant publish or subscribe access to their specific topic or subscription.
D. Configuring the application to run as a service account that has the pubsub.publisher and pubsub.subscriber roles would not provide
granular permissions management for the user.
upvoted 2 times
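A minimal sketch of answer A, binding the roles at the resource level with the Pub/Sub clients' IAM methods; the project, topic, subscription, and user are hypothetical placeholders.

```python
# Sketch: bind one user to publisher/subscriber roles on only their own
# topic and subscription. All identifiers are hypothetical.
from google.cloud import pubsub_v1

project = "my-project"
user = "user:alice@example.com"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project, "alice-topic")
policy = publisher.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(role="roles/pubsub.publisher", members=[user])
publisher.set_iam_policy(request={"resource": topic_path, "policy": policy})

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(project, "alice-sub")
policy = subscriber.get_iam_policy(request={"resource": sub_path})
policy.bindings.add(role="roles/pubsub.subscriber", members=[user])
subscriber.set_iam_policy(request={"resource": sub_path, "policy": policy})
```

Because the bindings live on the individual topic and subscription rather than the project, each user can only reach their own resources.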
A is the answer.
upvoted 1 times
A is correct
upvoted 2 times
I think it should be option A since the authorization should be at the user level for a specific resource.
upvoted 2 times
Selected Answer: A
Vote A
upvoted 2 times
You are evaluating developer tools to help drive Google Kubernetes Engine adoption and integration with your development environment, which includes Visual Studio Code and IntelliJ. What should you do?
A. Use Cloud Code to develop applications.
B. Use the Cloud Shell integrated Code Editor to edit code and configuration files.
C. Use a Cloud Notebook instance to ingest and process data and deploy models.
D. Use Cloud Shell to manage your infrastructure and applications from the command line.
Correct Answer: A
Reference:
https://cloud.google.com/code
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
The Google Cloud Code plugin can be installed in the IntelliJ and VS Code IDEs.
This gives developers a lot of flexibility when working with the GKE platform.
upvoted 2 times
Cloud Code is a set of plugins for VS Code and IntelliJ that provides an integrated development experience for working with Kubernetes and
Google Cloud
upvoted 1 times
Selected Answer: A
Cloud Code is a set of plugins for VS Code and IntelliJ that provides an integrated development experience for working with Kubernetes and
Google Cloud. The plugins include features such as interactive cluster and resource management, one-click Kubernetes cluster creation, and built-in debugging and diagnostics. It also lets you quickly deploy and debug applications using the Kubernetes and Google Cloud SDKs. Also, it
allows developers to easily perform tasks like deploying and debugging applications, managing resources, and running local development
environments. Cloud Code is a great tool for teams looking to streamline their development process for Kubernetes and Google Cloud.
upvoted 1 times
Selected Answer: A
A is the answer.
https://cloud.google.com/code/docs
Cloud Code provides IDE support for the full development cycle of Kubernetes and Cloud Run applications, from creating and customizing a new
application from sample templates to running your finished application. Cloud Code supports you along the way with run-ready samples, out-of-the-box configuration snippets, and a tailored debugging experience, making developing with Kubernetes and Cloud Run a whole lot easier!
upvoted 1 times
Selected Answer: A
A is correct
upvoted 3 times
Selected Answer: A
You are developing an ecommerce web application that uses App Engine standard environment and Memorystore for Redis. When a user logs into
the app, the application caches the user's information (e.g., session, name, address, preferences), which is stored for quick retrieval during
checkout.
While testing your application in a browser, you get a 502 Bad Gateway error. You have determined that the application is not connecting to Memorystore. What is the reason for this error?
A. Your Memorystore for Redis instance was deployed without a public IP address.
B. You configured your Serverless VPC Access connector in a different region than your App Engine instance.
C. The firewall rule allowing a connection between App Engine and Memorystore was removed during an infrastructure update by the DevOps
team.
D. You configured your application to use a Serverless VPC Access connector on a different subnet in a different availability zone than your App Engine app.
Correct Answer: A
Reference:
https://cloud.google.com/endpoints/docs/openapi/troubleshoot-response-errors
Selected Answer: B
B is the correct answer in this case. A is wrong because, per best practice and for security purposes, GCP doesn't allow a public IP for the Redis server.
upvoted 6 times
Selected Answer: B
https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
In the Region field, select a region for your connector. This must match the region of your serverless service.
If your service or job is in the region us-central or europe-west, use us-central1 or europe-west1.
upvoted 1 times
The most likely reason for the 502 Bad Gateway error is that the firewall rule allowing a connection between App Engine and Memorystore was
removed during an infrastructure update by the DevOps team.
This is because App Engine needs to be able to connect to Memorystore in order to retrieve the cached user information. If the firewall rule is
removed, App Engine will not be able to connect to Memorystore and the application will fail.
upvoted 1 times
The 502 Bad Gateway issue is more common in App Engine flexible environments than standard, due to memory issues. Here I go with option D, since pointing the connector to a different subnet than the App Engine instance could cause a Bad Gateway issue. A is not correct, because even with a different region and the same subnet, the App Engine instance does not have issues connecting to Memorystore.
upvoted 1 times
Selected Answer: C
C.
A: No. The public IP is not mandatory.
B: No. App Engine instance region can be different with Serverless VPC Access connector.
Link here: https://support.google.com/a/answer/10620692?hl=en .
"We support VPC access connectors in 6 regions (us-central, us-west1, us-east1, asia-southeast1, asia-east1, and europe-west1). .... Note: Support
for additional regions is coming soon." Although the document doesn't mention it directly, what if an app in App Engine in southamerica-east1-a would like to connect to Cloud SQL in a US region? Note that the diagram here is REALLY misleading:
https://cloud.google.com/vpc/docs/serverless-vpc-access#example_2
C: Yes. This is the only possible answer.
D: No. Serverless VPC Access connector shall be configured with a different subnet. See:
https://cloud.google.com/vpc/docs/configure-serverless-vpc-access#console
"Every connector requires its own /28 subnet to place connector instances on. A subnet cannot be used by other resources such as VMs, Private
Service Connect, or load balancers."
upvoted 2 times
Selected Answer: B
A is not correct because Cloud Run connects to Memorystore via the Serverless VPC Connector. Connections are over private networks. Public
addresses are not required.
B is correct. All of the components must be in the same region.
C is not correct because for connectivity between Cloud Run and Memorystore all that is required is a Serverless VPC Connector.
D is not correct. The Serverless VPC Connector is configured with a non-overlapping subnet that is not associated with the VPC.
upvoted 2 times
Selected Answer: D
While both B and D refer to the configuration of the Serverless VPC Access connector and could potentially cause issues with the application's
ability to connect to Memorystore, they are slightly different.
For B:
Having the connector in a different region than the App Engine instance could result in increased latency and potential connectivity issues, but it
would not necessarily prevent the App Engine instance from connecting to Memorystore.
For D:
This option is more specific: it indicates that if the connector is on a different subnet or in a different availability zone from the App Engine instance, it could cause issues with the application's ability to connect to Memorystore. This situation is less likely to cause latency or performance issues, but it will affect the connectivity of App Engine to Memorystore.
Both B and D refer to misconfiguration of the Serverless VPC Access connector, but option D is more specific and directly relates to connectivity
issue and is more likely to be the root cause of the 502 Bad Gateway error encountered.
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/vpc/docs/serverless-vpc-access#how_it_works
Serverless VPC Access is based on a resource called a connector. A connector handles traffic between your serverless environment and your VPC
network. When you create a connector in your Google Cloud project, you attach it to a specific VPC network and region. You can then configure
your serverless services to use the connector for outbound network traffic.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 2 times
Your team develops services that run on Google Cloud. You need to build a data processing service and will use Cloud Functions. The data to be
processed by the function is sensitive. You need to ensure that invocations can only happen from authorized services and follow Google-
recommended best practices for securing functions. What should you do?
A. Enable Identity-Aware Proxy in your project. Secure function access using its permissions.
B. Create a service account with the Cloud Functions Viewer role. Use that service account to invoke the function.
C. Create a service account with the Cloud Functions Invoker role. Use that service account to invoke the function.
D. Create an OAuth 2.0 client ID for your calling service in the same project as the function you want to secure. Use those credentials to invoke
the function.
Correct Answer: C
Reference:
https://medium.com/google-cloud/how-to-securely-invoke-a-cloud-function-from-google-kubernetes-engine-running-on-another-gcp-79797ec2b2c6
Selected Answer: C
For me, C. In link 1 we can see how Google suggests using service accounts, and in link 2 we can see that the invoker role exists.
Link1: https://cloud.google.com/functions/docs/securing#authentication Link2:
https://cloud.google.com/functions/docs/reference/iam/roles#cloud-functions-roles
upvoted 5 times
Selected Answer: C
IAP is not available for Cloud Functions, so the only possible option is C
upvoted 1 times
Selected Answer: A
The best way to ensure that invocations of a Cloud Function that processes sensitive data can only happen from authorized services and follows
Google-recommended best practices is to enable Identity-Aware Proxy in your project and secure function access using its permissions.
upvoted 1 times
Since this is service-to-service communication, the Cloud Functions Invoker role should be granted to the service that wants to invoke the Cloud Function in the data processing pipeline.
upvoted 1 times
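A minimal sketch of the calling side of answer C: a service running as a service account that holds the Cloud Functions Invoker role mints an ID token for the function URL and presents it as a bearer token. The URL and payload are hypothetical placeholders.

```python
# Sketch: authenticated service-to-service invocation of a Cloud
# Function. The caller's ambient service account must hold
# roles/cloudfunctions.invoker on the target function.
import google.auth.transport.requests
import google.oauth2.id_token
import requests

# Hypothetical function URL; it doubles as the token audience.
function_url = "https://us-central1-my-project.cloudfunctions.net/process"

auth_req = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_req, function_url)

resp = requests.post(
    function_url,
    headers={"Authorization": f"Bearer {token}"},
    json={"record_id": "123"},  # hypothetical payload
)
resp.raise_for_status()
```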
Selected Answer: C
vote c
upvoted 1 times
Selected Answer: A
1. Enable IAP to ensure that only authenticated and authorized users or services can access the Cloud Function
2. Set up an appropriate level of access control using IAM roles and policies, such as roles/cloudfunctions.invoker, to ensure that only authorized
services can invoke your Cloud Function. This can be done by creating a service account for the calling service, assigning the appropriate invoker role
to the service account on the data processing function, and using the service account credentials in the calling function
3. Use Google-provided libraries or resources, such as KMS or Cloud Storage, to encrypt and store sensitive data
4. Apply security best practices, such as limiting the scope of the service account and using Cloud IAP to protect access to your Cloud Function
5. Consider using event triggers to ensure your function is invoked only by the specific, authorized event types that you have configured
upvoted 1 times
C is the answer.
https://cloud.google.com/functions/docs/securing/authenticating
upvoted 1 times
C is correct
upvoted 1 times
I think D is correct
upvoted 1 times
Vote C
upvoted 1 times
Selected Answer: D
Agreed D …
The tokens themselves are created using the OAuth 2 framework and its extension, OpenID Connect, but the sequence is complex and error-
prone, and the use of Cloud Client Libraries to manage the process is highly recommended.
upvoted 2 times
You are deploying your applications on Compute Engine. One of your Compute Engine instances failed to launch. What should you do? (Choose
two.)
Correct Answer: DE
Reference:
https://cloudacademy.com/course/deploying-applications-on-gcp-compute/deploying-applications-and-services-on-compute-engine/
Selected Answer: AD
Selected Answer: AD
Selected Answer: AD
https://cloud.google.com/compute/docs/troubleshooting/vm-startup#identify_the_reason_why_the_boot_disk_isnt_booting
- Verify that your boot disk is not full.
If your boot disk is completely full and your operating system does not support automatic resizing, you won't be able to connect to your instance.
You must create a new instance and recreate the boot disk.
AD is the answer.
https://cloud.google.com/compute/docs/troubleshooting/vm-startup#identify_the_reason_why_the_boot_disk_isnt_booting
- Verify that your boot disk is not full.
If your boot disk is completely full and your operating system does not support automatic resizing, you won't be able to connect to your instance.
You must create a new instance and recreate the boot disk.
Selected Answer: AD
AD are correct
upvoted 4 times
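As a sketch, assuming a failing instance named my-instance in us-central1-a, the usual first diagnostic steps look like this:

```
# Inspect the boot sequence for errors
gcloud compute instances get-serial-port-output my-instance --zone=us-central1-a

# Check the boot disk's size and details to see whether it may be full
gcloud compute disks describe my-instance --zone=us-central1-a
```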
Selected Answer: AD
Selected Answer: AD
Vote AD
upvoted 3 times
Your web application is deployed to the corporate intranet. You need to migrate the web application to Google Cloud. The web application must be
available only to company employees and accessible to employees as they travel. You need to ensure the security and accessibility of the web application. What should you do?
A. Configure the application to check authentication credentials for each HTTP(S) request to the application.
B. Configure Identity-Aware Proxy to allow employees to access the application through its public IP address.
C. Configure a Compute Engine instance that requests users to log in to their corporate account. Change the web application DNS to point to
the proxy Compute Engine instance. After authenticating, the Compute Engine instance forwards requests to and from the web application.
D. Configure a Compute Engine instance that requests users to log in to their corporate account. Change the web application DNS to point to
the proxy Compute Engine instance. After authenticating, the Compute Engine issues an HTTP redirect to a public IP address hosting the web
application.
Correct Answer: B
Selected Answer: B
I will go with B.
upvoted 1 times
This approach allows you to use Google Cloud infrastructure to authenticate users against the corporate intranet before providing access to the
web application, without making major changes to the web application. By configuring a Compute Engine instance as a proxy and changing the
web application's DNS to point to this proxy, you can ensure that only employees who have been authenticated against the corporate intranet are
able to access the web application. This approach also allows the employees to access the web application while they are traveling, as long as they
have internet access.
upvoted 2 times
However, in this scenario, since the web application is hosted on the corporate intranet, it will not have a public IP address and will not be
accessible from the internet. It's also not possible to use IAP to restrict access to an intranet-hosted application by its IP address.
upvoted 1 times
That's why the best solution would be to use a VPN connection or a reverse proxy, to let employees access the application as if they
were on the intranet while traveling, or to secure access to the intranet-hosted web application from the internet.
upvoted 1 times
B is correct.
upvoted 3 times
Selected Answer: B
B is the answer.
https://cloud.google.com/iap/docs/concepts-overview
IAP lets you establish a central authorization layer for applications accessed by HTTPS, so you can use an application-level access control model
instead of relying on network-level firewalls.
IAP policies scale across your organization. You can define access policies centrally and apply them to all of your applications and resources. When
you assign a dedicated team to create and enforce policies, you protect your project from incorrect policy definition or implementation in any
application.
upvoted 2 times
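For illustration, once the migrated app sits behind an HTTPS load balancer, IAP can be enabled on its backend service and scoped to employees. The backend service and group names below are hypothetical, and some setups also require --oauth2-client-id/--oauth2-client-secret flags:

```
# Enable IAP on the load balancer's backend service
gcloud iap web enable --resource-type=backend-services --service=web-app-backend

# Grant employees access through a corporate group
gcloud iap web add-iam-policy-binding \
  --resource-type=backend-services --service=web-app-backend \
  --member="group:employees@example.com" \
  --role="roles/iap.httpsResourceAccessor"
```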
Selected Answer: B
B, while employees are traveling, they don't have access to the intranet, so they need to use the public IP. IAP secures the public endpoint.
upvoted 3 times
Selected Answer: C
C seems right
upvoted 3 times
I would completely agree with BackendBoi's comment. I would have picked option B only if it hadn't said to access through the public
IP. Out of all the options, option C seems the best pick. I have read somewhere that a proxy Compute Engine instance is used for securing access to the main
Compute Engine instance hosting the application.
upvoted 2 times
You have an application that uses an HTTP Cloud Function to process user activity from both desktop browser and mobile application clients. This
function will serve as the endpoint for all metric submissions using HTTP POST.
Due to legacy restrictions, the function must be mapped to a domain that is separate from the domain requested by users on web or mobile
sessions. The domain for the Cloud Function is https://fn.example.com. Desktop and mobile clients use the domain https://www.example.com.
You need to add a header to the HTTP response so that only those browser and mobile sessions can submit metrics to the Cloud Function. Which response header should you
add?
A. Access-Control-Allow-Origin: *
B. Access-Control-Allow-Origin: https://*.example.com
C. Access-Control-Allow-Origin: https://fn.example.com
D. Access-Control-Allow-Origin: https://www.example.com
Correct Answer: A
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
It is like a front end requesting a back-end service. Here the front-end service domain is https://www.example.com, and the back-end service
domain where the Cloud Function runs is https://fn.example.com.
upvoted 1 times
vote d
upvoted 2 times
Selected Answer: D
D is the answer.
https://cloud.google.com/functions/docs/samples/functions-http-cors
upvoted 1 times
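The docs sample linked above boils down to a sketch like this (the handler name is hypothetical):

```python
def submit_metrics(request):
    # Only https://www.example.com may make cross-origin calls
    cors_headers = {"Access-Control-Allow-Origin": "https://www.example.com"}

    # Answer the CORS preflight request browsers send before the POST
    if request.method == "OPTIONS":
        cors_headers.update({
            "Access-Control-Allow-Methods": "POST",
            "Access-Control-Allow-Headers": "Content-Type",
            "Access-Control-Max-Age": "3600",
        })
        return ("", 204, cors_headers)

    # ... record the submitted metrics ...
    return ("ok", 200, cors_headers)
```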
Selected Answer: D
D is correct
upvoted 2 times
Selected Answer: D
I agree it should be D
upvoted 4 times
You have an HTTP Cloud Function that is called via POST. Each submission's request body has a flat, unnested JSON structure containing numeric
and text data. After the Cloud Function completes, the collected data should be immediately available for ongoing and complex analytics by many users in parallel. Where should you store the data?
B. Transform the POST request's JSON data, and stream it into BigQuery.
C. Transform the POST request's JSON data, and store it in a regional Cloud SQL cluster.
D. Persist each POST request's JSON data as an individual file within Cloud Storage, with the file name containing the request identifier.
Correct Answer: D
Selected Answer: B
B should be the correct one because the question mentions analytics of the data.
upvoted 15 times
Selected Answer: B
Selected Answer: B
The key here is "Collected data should be IMMEDIATELY available for ongoing and complex analytics", and hence option B is correct.
upvoted 1 times
"data should be immediately available for ongoing and complex analytics" -> B
upvoted 1 times
B. Transform the POST request's JSON data, and stream it into BigQuery.
BigQuery is a highly scalable data warehouse that is well suited for handling large amounts of data and complex analytics in near real-time. By
streaming the JSON data from your Cloud Function directly into BigQuery, you can make the collected data immediately available for analytics by
many users in parallel. BigQuery supports various data types, including JSON, so you can store your request body without any transformation.
Cloud Storage (option D), by contrast, is not a good fit for data that you need to analyze in near real-time, and it would require additional processing to be available for analysis.
upvoted 1 times
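A minimal sketch of the streaming approach, assuming the google-cloud-bigquery client and a hypothetical table whose schema matches the flat JSON:

```python
from google.cloud import bigquery

client = bigquery.Client()
TABLE_ID = "my-project.metrics.submissions"  # hypothetical table

def handle_post(request):
    row = request.get_json()  # flat, unnested JSON body
    # Stream the row into BigQuery; it is queryable almost immediately
    errors = client.insert_rows_json(TABLE_ID, [row])
    if errors:
        return (f"Insert failed: {errors}", 500)
    return ("ok", 200)
```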
B is the answer.
upvoted 1 times
B is correct
upvoted 2 times
Your security team is auditing all deployed applications running in Google Kubernetes Engine. After completing the audit, your team discovers that
some of the applications send traffic within the cluster in clear text. You need to ensure that all application traffic is encrypted as quickly as
possible while minimizing changes to your applications and maintaining support from Google. What should you do?
B. Install Istio, enable proxy injection on your application namespace, and then enable mTLS.
C. Define Trusted Network ranges within the application, and configure the applications to allow traffic only from those networks.
D. Use an automated process to request SSL Certificates for your applications from Let's Encrypt and add them to your applications.
Correct Answer: A
I vote B
https://cloud.google.com/istio/docs/istio-on-gke/installing
(deprecated)
upvoted 8 times
Selected Answer: B
Istio is a service mesh that can be used to encrypt traffic between applications in a GKE cluster. It does this by injecting a sidecar proxy into each
pod. The sidecar proxy intercepts all traffic to and from the pod and encrypts it using mTLS (mutual TLS).
upvoted 1 times
Istio is suitable for providing cross-cutting concerns to the services running in the GKE cluster. Istio provides security, fault tolerance, and resiliency
out of the box.
upvoted 1 times
Selected Answer: B
B. Install Istio, enable proxy injection on your application namespace, and then enable mTLS.
Istio is a service mesh that runs within your Kubernetes cluster and provides a set of features, such as traffic management, service discovery, and
automatic encryption of traffic between services using mutual Transport Layer Security (mTLS). By installing Istio and enabling proxy injection on
your application namespace, you can quickly and easily enable mTLS for all traffic within the cluster without making changes to your applications.
Once the proxy injection is enabled, Istio automatically adds the necessary sidecar proxies to each pod in the namespace and configures them to
encrypt traffic.
upvoted 1 times
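Concretely, after installing Istio, enabling injection and strict mTLS on a namespace is two small steps; the namespace name below is hypothetical:

```
kubectl label namespace my-app istio-injection=enabled
```

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-app        # hypothetical application namespace
spec:
  mtls:
    mode: STRICT           # reject any non-mTLS traffic to the namespace
```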
B is the answer.
https://cloud.google.com/istio/docs/istio-on-gke/overview
Istio gives you the following benefits:
- Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 3 times
Selected Answer: B
B should work. It's the only answer with a solution without blocking or restricting the cluster traffic
upvoted 2 times
You migrated some of your applications to Google Cloud. You are using a legacy monitoring platform deployed on-premises for both on-premises
and cloud-deployed applications. You discover that your notification system is responding slowly to time-critical problems in the cloud. What should you do?
C. Migrate some traffic back to your old platform. Perform A/B testing on the two platforms concurrently.
D. Use Cloud Logging and Cloud Monitoring to capture logs, monitor, and send alerts. Send them to your existing platform.
Correct Answer: D
Selected Answer: D
Selected Answer: D
I will go with D.
upvoted 1 times
Selected Answer: D
D, but if the solution runs on GCE, logging and monitoring wouldn't be there by default, since GCE does not have direct integration (an agent must be installed).
Selected Answer: D
D. Use Cloud Logging and Cloud Monitoring to capture logs, monitor, and send alerts. Send them to your existing platform.
is a valid option if your aim is to integrate the on-premises monitoring platform with the cloud monitoring platform; this way you can have a holistic
view of all your application performance.
You can also use Google Cloud's Stackdriver service to integrate the monitoring, logging and tracing across both on-premise and cloud. Stackdriver
can be used to get unified view of all your application performance and trace the root cause of an issue.
upvoted 1 times
You can use Cloud Monitoring to discover resources running in your on-premises infrastructure by using the Cloud Monitoring agent, which can
be installed on the machines running on-premises. It will help you monitor on-premises machines with Cloud Monitoring.
upvoted 1 times
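One way to send the alerts onward to the existing platform is a webhook notification channel; a sketch with the google-cloud-monitoring client, where the project, display name, and webhook URL are hypothetical:

```python
from google.cloud import monitoring_v3

client = monitoring_v3.NotificationChannelServiceClient()

channel = monitoring_v3.NotificationChannel(
    type_="webhook_tokenauth",                       # built-in webhook channel type
    display_name="legacy-platform-forwarder",        # hypothetical
    labels={"url": "https://legacy.example.com/alerts"},
)

# Alerting policies referencing this channel will POST to the legacy platform
client.create_notification_channel(
    name="projects/my-project",                      # hypothetical project
    notification_channel=channel,
)
```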
Selected Answer: D
D is the answer.
upvoted 1 times
Selected Answer: D
Selected Answer: D
D is correct
upvoted 2 times
Selected Answer: D
Vote D
upvoted 1 times
Selected Answer: A
I vote for A
upvoted 2 times
You recently deployed your application in Google Kubernetes Engine, and now need to release a new version of your application. You need the
ability to instantly roll back to the previous version in case there are issues with the new version. Which deployment model should you use?
A. Perform a rolling deployment, and test your new application after the deployment is complete.
B. Perform A/B testing, and test your application periodically after the new tests are implemented.
C. Perform a blue/green deployment, and test your new application after the deployment is complete.
D. Perform a canary deployment, and test your new application periodically after the new version is deployed.
Correct Answer: D
Option C is correct
upvoted 8 times
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
The key here is rolling back to the previous deployment if we find issues with the current (latest) deployment. With canary, only a certain portion of
the traffic is allowed to the newer version, which amounts to on-the-fly testing. With A/B, a certain portion of the traffic is split to dedicated testers to confirm
everything is fine with the newer version.
upvoted 1 times
Selected Answer: C
C. Perform a blue/green deployment, and test your new application after the deployment is complete.
A blue/green deployment is a technique that allows you to release new versions of an application while maintaining the ability to roll back to the
previous version if there are issues. It works by having two identical production environments: the "blue" environment, which is serving traffic,
and the "green" environment, which is idle. When you want to release a new version of your application, you deploy it to the "green"
environment, test it to make sure it is working as expected, and then switch traffic to the "green" environment.
This way you get a zero-downtime deployment, and if there are any issues with the new version you can easily roll back to the previous version by
switching the traffic back to the "blue" environment.
upvoted 1 times
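On GKE, the instant-rollback property comes from switching a Service selector between the two Deployments; a sketch with hypothetical names and labels:

```
# v1 ("blue") is live; deploy v2 ("green") alongside it
kubectl apply -f deployment-v2.yaml

# Cut traffic over to v2 by repointing the Service selector
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"v2"}}}'

# Instant rollback if v2 misbehaves
kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"v1"}}}'
```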
C is the answer.
https://cloud.google.com/architecture/application-deployment-and-testing-strategies#choosing_the_right_strategy
upvoted 1 times
Selected Answer: C
C seems correct
upvoted 2 times
Vote C
upvoted 1 times
You developed a JavaScript web application that needs to access Google Drive's API and obtain permission from users to store files in their
Google Drives. You need to select an authorization approach for your application. What should you do?
Correct Answer: D
Reference:
https://developers.google.com/drive/api/v3/about-auth
Selected Answer: C
OAuth 2.0 is an authorization framework that enables applications to obtain limited access to user accounts on an HTTP service, such as Google
Drive. OAuth 2.0 is the preferred authorization approach for JavaScript web applications because it provides a secure and user-friendly way to
obtain permission from users to access their Google Drive accounts.
upvoted 1 times
Selected Answer: D
We need to have Oauth 2.1 flow. The client app should have client id and secret key generated from Google drive application. This way the user
can login to their google drive account and can perform CRUD operations. The best thing here is, the client app is not aware of the user
credentials, and it is very secure. The most common way of getting access token is from authorization code flow with PKCE. PKCE, since it is a JS
client app.
upvoted 1 times
OAuth is an authorization framework that allows third-party applications to access resources on behalf of a user, without having to handle the
user's credentials. To use Google Drive's API, your application needs to obtain permission from the user to access their Google Drive, and the best
way to do this is through OAuth.
You would need to create an OAuth 2.0 client ID and integrate it into your application. This will allow your application to redirect users to the
Google OAuth 2.0 server, where they can grant permission to your application to access their Google Drive.
upvoted 1 times
D is the answer.
https://developers.google.com/drive/api/guides/api-specific-auth
upvoted 1 times
D is correct
upvoted 3 times
Selected Answer: D
Yes! it's D.
upvoted 1 times
You manage an ecommerce application that processes purchases from customers who can subsequently cancel or change those purchases. You
discover that order volumes are highly variable and the backend order-processing system can only process one request at a time. You want to
ensure seamless performance for customers regardless of usage volume. It is crucial that customers' order update requests are performed in the sequence in which they were generated. What should you do?
A. Send the purchase and change requests over WebSockets to the backend.
B. Send the purchase and change requests as REST requests to the backend.
C. Use a Pub/Sub subscriber in pull mode and use a data store to manage ordering.
D. Use a Pub/Sub subscriber in push mode and use a data store to manage ordering.
Correct Answer: B
I vote C
https://cloud.google.com/pubsub/docs/pull
upvoted 9 times
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
Pull model, so that the application handles requests by pulling them one by one. This is called event-driven architecture, where the response to the
client from the app happens asynchronously.
upvoted 1 times
C. Use a Pub/Sub subscriber in pull mode and use a data store to manage ordering.
To ensure that customer order update requests are performed in the sequence in which they were generated, the recommended approach is to use
a Pub/Sub subscriber in pull mode, together with a data store to manage ordering.
This approach allows the backend system to process requests one at a time, while maintaining the order of requests. By using a pull-based
subscription, the backend system can control the rate at which messages are consumed from the Pub/Sub topic, and can ensure that requests are
processed in the correct order. The data store can be used to maintain a queue of requests, where each request is added to the queue in the order
that it was generated, and then processed by the backend system.
upvoted 1 times
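A sketch of such a pull subscriber, with hypothetical project/subscription names; capping outstanding messages at one keeps the single-threaded backend from being overrun, while the data store (or Pub/Sub ordering keys) preserves the request sequence:

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "orders-sub")

def callback(message):
    # ... apply the purchase/change, recording its sequence in the data store ...
    message.ack()

# Allow only one outstanding message so requests are consumed one at a time
flow_control = pubsub_v1.types.FlowControl(max_messages=1)
future = subscriber.subscribe(subscription, callback=callback, flow_control=flow_control)
future.result()  # block and process messages as they arrive
```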
B. Send the purchase and change requests as REST requests to the backend.
Sending the request as REST does not ensure that requests are processed in the order they were generated, it also would not allow controlling
the rate at which requests are consumed.
D. Use a Pub/Sub subscriber in push mode and use a data store to manage ordering.
Push-based subscriptions don't allow controlling the rate at which requests are consumed, and they may not ensure that requests are processed in
the order they were generated.
upvoted 1 times
Selected Answer: C
C is the answer.
upvoted 1 times
C is correct
upvoted 2 times
Correct answer C
upvoted 1 times
Your company needs a database solution that stores customer purchase history and meets the following requirements:
Correct Answer: A
Selected Answer: A
Firestore in Native mode is a NoSQL document database that is designed for scalability, performance, and ease of use. It is a good choice for
storing customer purchase history because it meets all of the requirements
upvoted 1 times
Selected Answer: A
Firestore is for storing semi-structured data. It is optimized for high reads and low writes. Since each document can have its own structure
(as in MongoDB), Firestore is suitable for the above requirements.
upvoted 1 times
Firestore in Native mode satisfies these requirements. It is a NoSQL document database, which means that it stores semi-structured data, and each
document can have its own fields and structure. This allows for storing distinct record formats at the same time, which is a requirement. Firestore
also has strong query performance and support: customers can query their purchases immediately after submission, and purchases can be sorted on
a variety of fields. It is highly optimized to support real-time queries, so you can retrieve data with low latency.
upvoted 1 times
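A sketch of the kind of query this enables (collection and field names are hypothetical):

```python
from google.cloud import firestore

db = firestore.Client()

# Documents may each have their own shape; queries still work on shared fields
purchases = (
    db.collection("purchases")
    .where("customer_id", "==", "c123")
    .order_by("purchase_date", direction=firestore.Query.DESCENDING)
    .stream()
)
for doc in purchases:
    print(doc.id, doc.to_dict())
```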
A is the answer.
upvoted 2 times
@megn they mean that each record can have a different shape, the data is not consistent.
upvoted 1 times
A is correct
upvoted 2 times
Firestore is the next major version of Datastore and a re-branding of the product. Taking the best of Datastore and the Firebase Realtime Database,
Firestore is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
You recently developed a new service on Cloud Run. The new service authenticates using a custom service and then writes transactional
information to a Cloud
Spanner database. You need to verify that your application can support up to 5,000 read and 1,000 write transactions per second while identifying
any bottlenecks that occur. Your test infrastructure must be able to autoscale. What should you do?
A. Build a test harness to generate requests and deploy it to Cloud Run. Analyze the VPC Flow Logs using Cloud Logging.
B. Create a Google Kubernetes Engine cluster running the Locust or JMeter images to dynamically generate load tests. Analyze the results using Cloud Trace.
C. Create a Cloud Task to generate a test load. Use Cloud Scheduler to run 60,000 Cloud Task transactions per minute for 10 minutes. Analyze the results using Cloud Monitoring.
D. Create a Compute Engine instance that uses a LAMP stack image from the Marketplace, and use Apache Bench to generate load tests
Correct Answer: B
I vote B
https://cloud.google.com/architecture/distributed-load-testing-using-gke
upvoted 6 times
Selected Answer: B
B is correct.
upvoted 1 times
The key here is "Your test infrastructure must be able to autoscale" and load testing.
upvoted 1 times
Selected Answer: B
B. Create a Google Kubernetes Engine cluster running the Locust or JMeter images to dynamically generate load tests. Analyze the results using
Cloud Trace.
To verify that your application can support up to 5,000 read and 1,000 write transactions per second and to identify any bottlenecks that occur, you
can use a load testing tool such as Locust or JMeter to generate load tests on your Cloud Run service. These tools allow you to simulate a high
number of concurrent requests and help you determine the maximum number of requests your service can handle.
You can run the load testing tool on a Google Kubernetes Engine (GKE) cluster which will support autoscale feature, this way you can handle the
high number of requests, and use Cloud Trace to analyze the results, which will give you insights into the performance and any bottlenecks.
upvoted 1 times
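For reference, a minimal Locust file for such a test could look like this; the endpoints are hypothetical, and the 5:1 task weights mirror the 5,000 read / 1,000 write target:

```python
from locust import HttpUser, between, task

class TransactionUser(HttpUser):
    wait_time = between(0.1, 1)

    @task(5)  # five reads for every write
    def read_transaction(self):
        self.client.get("/transactions/123")   # hypothetical endpoint

    @task(1)
    def write_transaction(self):
        self.client.post("/transactions", json={"amount": 10})
```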
C. Create a Cloud Task to generate a test load. Use Cloud Scheduler to run 60,000 Cloud Task transactions per minute for 10 minutes. Analyze
the results using Cloud Monitoring.
Although cloud task is a good solution for scheduling the test loads, it's not the best solution for load testing since it doesn't support dynamic
loading and it would be hard to get the fine-grained details about the performance.
D. Create a Compute Engine instance that uses a LAMP stack image from the Marketplace, and use Apache Bench to
upvoted 1 times
B is the answer.
https://cloud.google.com/architecture/distributed-load-testing-using-gke
upvoted 1 times
Selected Answer: B
B is correct
upvoted 2 times
Selected Answer: B
This tutorial explains how to use Google Kubernetes Engine (GKE) to deploy a distributed load testing framework that uses multiple containers to
create traffic for a simple REST-based API. This tutorial load-tests a web application deployed to App Engine that exposes REST-style endpoints to
respond to incoming HTTP POST requests.
You can use this same pattern to create load testing frameworks for a variety of scenarios and applications, such as messaging systems, data
stream management systems, and database systems.
upvoted 1 times
You are using Cloud Build for your CI/CD pipeline to complete several tasks, including copying certain files to Compute Engine virtual machines.
Your pipeline requires a flat file that is generated in one builder in the pipeline to be accessible by subsequent builders in the same pipeline. How
should you store the file so that all the builders in the pipeline can access it?
A. Store and retrieve the file contents using Compute Engine instance metadata.
B. Output the file contents to a file in /workspace. Read from the same /workspace file in the subsequent build step.
C. Use gsutil to output the file contents to a Cloud Storage object. Read from the same object in the subsequent build step.
D. Add a build argument that runs an HTTP POST via curl to a separate web server to persist the value in one builder. Use an HTTP GET via curl in the subsequent build step to read the value.
Correct Answer: D
I vote B
https://cloud.google.com/build/docs/build-config-file-schema
upvoted 10 times
Selected Answer: B
Correct answer is B. Save your flat file under the /workspace folder, and the same file can then be used in other build steps. A very simple and
straightforward approach. :)
upvoted 1 times
Selected Answer: B
I vote B
https://cloud.google.com/build/docs/build-config-file-schema
upvoted 1 times
The best approach is to output the file contents to a file in the /workspace directory in one build step and read from the same /workspace file in the
subsequent build step. This way, the file is easily accessible by all builders in the pipeline, as they all run in the same environment and share the
same file system. It's the easiest and simplest way of sharing the file between the steps in the pipeline.
upvoted 1 times
B is the answer.
https://cloud.google.com/build/docs/configuring-builds/pass-data-between-steps#passing_data_using_workspaces
To pass data between build steps, store the assets produced by the build step in /workspace and these assets will be available to any subsequent
build steps.
upvoted 1 times
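A sketch of a cloudbuild.yaml that passes a generated file between steps; the builder images and file name are hypothetical:

```yaml
steps:
# Step 1: generate the flat file under /workspace
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'echo "generated value" > /workspace/config.txt']
# Step 2: a later builder reads the same file from the shared volume
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'cat /workspace/config.txt']
```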
https://cloud.google.com/build/docs/build-config-file-schema
Use the dir field in a build step to set a working directory to use when running the step's container. If you set the dir field in the build step, the
working directory is set to /workspace/<dir>. If this value is a relative path, it is relative to the build's working directory. If this value is absolute, it
may be outside the build's working directory, in which case the contents of the path may NOT be persisted across build step executions
upvoted 1 times
B is correct
upvoted 2 times
Selected Answer: B
To pass data between build steps, store the assets produced by the build step in /workspace and these assets will be available to any subsequent
build steps.
upvoted 1 times
agree with b
upvoted 1 times
Your company’s development teams want to use various open source operating systems in their Docker builds. When images are created in
published containers in your company’s environment, you need to scan them for Common Vulnerabilities and Exposures (CVEs). The scanning
process must not impact software development agility. You want to use managed services where possible. What should you do?
A. Enable the Vulnerability scanning setting in the Container Registry.
B. Create a Cloud Function that is triggered on a code check-in and scan the code for CVEs.
C. Disallow the use of non-commercially supported base images in your development environment.
D. Use Cloud Monitoring to review the output of Cloud Build to determine whether a vulnerable version has been used.
Correct Answer: A
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
A is a very straight forward option. One more choice would be using vulnerability scanning tools like Grype ( open source ) in the build step itself
with cloud build.
upvoted 1 times
A. Enable the Vulnerability scanning setting in the Container Registry would be the best solution in this case.
It would allow you to automatically scan images for known vulnerabilities and detect any issues as soon as they're pushed to the registry. This will
help to identify vulnerabilities early in the development cycle, allowing the development teams to take action before images are deployed to
production. This approach is automated, does not impact development agility and since it is a built-in feature of the Container Registry, it is a
managed service and therefore, it does not require additional maintenance and management.
upvoted 2 times
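Turning this on amounts to enabling one API; scanning then happens automatically whenever an image is pushed:

```
gcloud services enable containerscanning.googleapis.com
```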
Option C, Disallow the use of non-commercially supported base images in the development environment, would limit the flexibility of the
development teams, and they may not be able to use the best tools for the job which can negatively impact the quality of the end-product.
Option D, Use Cloud Monitoring to review the output of Cloud Build to determine whether a vulnerable version has been used, is a good
practice to detect and alert on potential issues as soon as possible, but it is an additional step that needs to be set up and maintained.
Additionally, it does not handle the vulnerability scanning on its own but rather acts as an additional layer of security.
upvoted 2 times
A is the answer.
https://cloud.google.com/container-analysis/docs/os-overview
upvoted 1 times
You are configuring a continuous integration pipeline using Cloud Build to automate the deployment of new container images to Google
Kubernetes Engine (GKE). The pipeline builds the application from its source code, runs unit and integration tests in separate steps, and pushes
the container to Container Registry. The application runs on a Python web server.
FROM python:3.7-alpine
COPY . /app
WORKDIR /app
You notice that Cloud Build runs are taking longer than expected to complete. You want to decrease the build time. What should you do? (Choose
two.)
A. Select a virtual machine (VM) size with higher CPU for Cloud Build runs.
B. Deploy a Container Registry on a Compute Engine VM in a VPC, and use it to store the final images.
C. Cache the Docker image for subsequent builds using the -- cache-from argument in your build config file.
D. Change the base image in the Dockerfile to ubuntu:latest, and install Python 3.7 using a package manager utility.
E. Store application source code on Cloud Storage, and configure the pipeline to use gsutil to download the source code.
Correct Answer: CE
Selected Answer: AC
AC is correct.
upvoted 1 times
Selected Answer: AC
Apart from A and C, one more good option would be to copy the app directory only after RUN pip install, so that the pip layer stays cached and the
copy step does not force the dependencies to be reinstalled on every build.
upvoted 1 times
A is correct because a high-CPU virtual machine type can increase the speed of your build.
B is not correct because a Container Registry on a VM will not speed up the build.
C is correct because the same container is used in subsequent steps for testing and to be pushed to the registry.
D is not correct because an ubuntu container image will be significantly larger than the python:3.7-alpine image.
E is not correct because storing the application source code on Cloud Storage does not decrease the time to build the application.
upvoted 2 times
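A cloudbuild.yaml sketch combining both fixes, with hypothetical image names; $PROJECT_ID and $SHORT_SHA are standard Cloud Build substitutions:

```yaml
steps:
# Warm the cache with the last published image (tolerate a miss on the first run)
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/my-app:latest || exit 0']
# Build using the pulled image as a layer cache
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA',
         '--cache-from', 'gcr.io/$PROJECT_ID/my-app:latest', '.']
options:
  machineType: 'E2_HIGHCPU_8'   # higher-vCPU build VM
```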
https://cloud.google.com/build/docs/optimize-builds/increase-vcpu-for-builds
https://cloud.google.com/build/docs/optimize-builds/building-leaner-containers#building_leaner_containers
Yes, answers A and C are both valid solutions based on the articles you linked.
Increasing the number of vCPUs allocated to the Cloud Build VM can help to decrease build time because it provides the build environment with
more CPU resources to use, which can help to speed up the build process. This can be achieved by selecting a VM size with higher CPU for Cloud
Build runs.
as mentioned, caching the Docker image for subsequent builds can also help to decrease build time by reusing previously built image layers. This
can be achieved by adding the --cache-from argument to the build command in the build config file, which tells Cloud Build to use the specified
images as a cache source.
upvoted 1 times
In summary, options A and C are the best solutions to optimize the CI/CD pipeline in this scenario, as they directly impact the build
process; whether the other options are worth considering depends on the current infrastructure and requirements of your pipeline.
upvoted 1 times
Selected Answer: AC
AC is the answer.
https://cloud.google.com/build/docs/optimize-builds/increase-vcpu-for-builds
By default, Cloud Build runs your builds on a standard virtual machine (VM). In addition to the standard VM, Cloud Build provides several high-CPU
VM types to run builds. To increase the speed of your build, select a machine with a higher vCPU to run builds. Keep in mind that although
selecting a high vCPU machine increases your build speed, it may also increase the startup time of your build as Cloud Build only starts non-
standard machines on demand.
https://cloud.google.com/build/docs/optimize-builds/speeding-up-builds#using_a_cached_docker_image
The easiest way to increase the speed of your Docker image build is by specifying a cached image that can be used for subsequent builds. You can
specify the cached image by adding the --cache-from argument in your build config file, which will instruct Docker to build using that image as a
cache source.
upvoted 1 times
Selected Answer: AC
IMHO
D - alpine is a much smaller distro
B and E - do not make sense
https://cloud.google.com/build/docs/optimize-builds/increase-vcpu-for-builds
https://cloud.google.com/build/docs/optimize-builds/speeding-up-builds
upvoted 4 times
You are building a CI/CD pipeline that consists of a version control system, Cloud Build, and Container Registry. Each time a new tag is pushed to
the repository, a Cloud Build job is triggered, which runs unit tests on the new code builds a new Docker container image, and pushes it into
Container Registry. The last step of your pipeline should deploy the new container to your production Google Kubernetes Engine (GKE) cluster. You
need to select a tool and deployment strategy that meets the following requirements:
A. Trigger a Spinnaker pipeline configured as an A/B test of your new code and, if it is successful, deploy the container to production.
B. Trigger a Spinnaker pipeline configured as a canary test of your new code and, if it is successful, deploy the container to production.
C. Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform
a canary test.
D. Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform
a shadow test.
Correct Answer: D
Selected Answer: B
Spinnaker is a cloud native continuous delivery platform that can be used to deploy applications to a variety of cloud providers, including Google
Kubernetes Engine (GKE). Spinnaker is a good choice for deploying applications to GKE because it provides a number of features that make it easy
to deploy applications quickly and reliably, including:
Canary deployments: Canary deployments allow you to deploy a new version of your application to a small subset of users before rolling it out to
all users. This allows you to test the new version of your application and identify any problems before they impact all of your users.
Rollback: Spinnaker can be used to quickly roll back to a previous version of your application if you encounter any problems with the new version.
upvoted 1 times
Shadow testing is the right choice. Canary is not suitable here, since the requirement is to test before rolling the new version out to users. Option A also
comes very close, since it has A/B testing, but that is done only after releasing the newer version to users: a small amount of traffic is diverted
to dedicated users (testers) who give faster feedback about the newer product/service.
upvoted 1 times
Option D, triggering another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can
perform a shadow test, could meet the requirements you specified.
Shadow testing is a technique where you can test the new version of an application by mirroring user traffic to it, without impacting the user
requests to the current version. This way, you can test the new version of your application in a real-world environment with real user traffic, which
allows for testing before being rolled out to users and allows for a quick rollback if needed. And with the use of Kubernetes CLI tools you can
automate this process, so the testing and deployment is fully automated.
upvoted 2 times
Selected Answer: D
D is the answer.
https://cloud.google.com/architecture/implementing-deployment-and-testing-strategies-on-gke#perform_a_shadow_test
With a shadow test, you test the new version of your application by mirroring user traffic from the current application version without impacting
the user requests.
upvoted 1 times
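The guide linked above implements shadow tests with traffic mirroring. With Istio installed in the cluster, a VirtualService sketch like this keeps serving users from v1 while v2 silently receives a copy of the traffic (host and service names are hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app-v1      # users keep getting responses from v1
    mirror:
      host: my-app-v2        # v2 receives a mirrored copy of the traffic
    mirrorPercentage:
      value: 100.0
```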
vote D
upvoted 1 times
IMHO, by eliminating:
B and C - use canary, which lets users use the new version without testing
A - canary is often a synonym of A/B testing
upvoted 2 times
Your operations team has asked you to create a script that lists the Cloud Bigtable, Memorystore, and Cloud SQL databases running within a
project. The script should allow users to submit a filter expression to limit the results presented. How should you retrieve the data?
A. Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Combine the results, and then apply the filter to display the
results
B. Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Filter the results individually, and then combine them to display the results.
C. Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use a filter within the application, and then display the results.
D. Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use the --filter flag with each command, and then display the results.
Correct Answer: A
Selected Answer: D
D is correct
upvoted 1 times
Selected Answer: D
Easy and simple. List all the different types of instances and apply the '--filter' option in each command.
upvoted 1 times
Selected Answer: D
Option D is correct, running gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list and using the --filter flag with
each command can be used to filter the results before displaying them. This would allow users to submit a filter expression to limit the results
presented as specified in the question. As per the google official documentation.
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/sdk/gcloud/reference/topic/filters
Most gcloud commands return a list of resources on success. By default they are pretty-printed on the standard output. The --
format=NAME[ATTRIBUTES](PROJECTION) and --filter=EXPRESSION flags along with projections can be used to format and change the default
output to a more meaningful result.
Use the --format flag to change the default output format of a command. For details run $ gcloud topic formats.
Use the --filter flag to select resources to be listed. Resource filters are described in detail below.
upvoted 2 times
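A sketch of option D with a user-supplied expression; the region, Cloud SQL instance, and filter value are hypothetical (gcloud redis instances list needs a region, and gcloud sql databases list needs an instance):

```
FILTER='name~prod'   # user-supplied filter expression

gcloud bigtable instances list --filter="$FILTER"
gcloud redis instances list --region=us-central1 --filter="$FILTER"
gcloud sql databases list --instance=my-instance --filter="$FILTER"
```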
Selected Answer: D
option D
upvoted 1 times
Selected Answer: D
vote D
upvoted 1 times
You need to deploy a new European version of a website hosted on Google Kubernetes Engine. The current and new websites must be accessed
via the same HTTP(S) load balancer's external IP address, but have different domain names. What should you do?
A. Define a new Ingress resource with a host rule matching the new domain
B. Modify the existing Ingress resource with a host rule matching the new domain
C. Create a new Service of type LoadBalancer specifying the existing IP address as the loadBalancerIP
D. Generate a new Ingress resource and specify the existing IP address as the kubernetes.io/ingress.global-static-ip-name annotation value
Correct Answer: A
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
Right answer is B. The existing Ingress resource needs to be updated to add the new domain for the new service that runs within the cluster of worker
nodes. It looks like this:
User ---> HTTP(S) load balancer IP ---> Domain 1 ---> older version of application
                                   ---> Domain 2 ---> new version of application
upvoted 1 times
https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip
B. You should modify the existing Ingress resource with a host rule matching the new domain. This will allow you to route traffic to the new website
while still using the same IP address and load balancer. This approach allows you to use name-based virtual hosting, which supports routing HTTP
traffic to multiple host names at the same IP address. It also enables you to reuse the existing IP address and load balancer, which means that the
existing website and the new website can be accessed through the same IP address while having different domain names.
upvoted 1 times
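A sketch of the modified Ingress with a second host rule; the domains, Service names, and reserved-IP annotation value are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website
  annotations:
    kubernetes.io/ingress.global-static-ip-name: website-ip   # existing reserved IP
spec:
  rules:
  - host: www.example.com          # current site
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website-current
            port:
              number: 80
  - host: eu.example.com           # new European site
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website-eu
            port:
              number: 80
```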
Selected Answer: B
B is the answer.
https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
upvoted 1 times
Selected Answer: D
Answer is D
https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip
upvoted 1 times
"must be accessed via the same HTTP(S) load balancer's external IP address" means re-use the existing ingress resource
upvoted 3 times
You are developing a single-player mobile game backend that has unpredictable traffic patterns as users interact with the game throughout the
day and night. You want to optimize costs by ensuring that you have enough resources to handle requests, but minimize over-provisioning. You
also want the system to handle traffic spikes efficiently. Which compute platform should you use?
A. Cloud Run
Correct Answer: B
Selected Answer: A
Cloud Run is the cheapest solution among these options and can scale up and down to 0 instances.
upvoted 1 times
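A deployment sketch showing how Cloud Run scales to zero between sessions yet caps spikes; the names and limits are hypothetical:

```
gcloud run deploy game-backend \
  --image=gcr.io/my-project/game-backend \
  --region=us-central1 \
  --min-instances=0 \
  --max-instances=100
```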
Selected Answer: D
Google Kubernetes Engine (GKE) is a managed Kubernetes service that allows you to deploy and run containerized applications. GKE is a good
choice for running a single-player mobile game backend because it can be easily scaled up or down to meet the needs of your game.
Cloud Run is a serverless computing platform that allows you to run code without managing servers. Cloud Run is a good choice for running
simple applications, but it is not as scalable as GKE.
upvoted 1 times
I go with A. The requirement is to optimize the cost while scaling for unexpected spikes in the traffic. Cloud Run is the cheapest among all the
other options given.
upvoted 1 times
Selected Answer: D
Bing chose D: For a single-player mobile game backend with unpredictable traffic patterns and a need to optimize costs while handling traffic
spikes efficiently, Google Kubernetes Engine (GKE) using cluster autoscaling (option D) would be a good choice. GKE’s cluster autoscaler
automatically resizes the number of nodes in a node pool based on the demands of your workloads. This helps ensure that you have enough
resources to handle requests while minimizing over-provisioning and optimizing costs.
upvoted 1 times
Selected Answer: A
Compute Engine answers are eliminated because they can't scale quickly enough.
The GKE answer is ruled out because you can end up over-provisioned, and it also cannot scale out to add more nodes quickly enough.
upvoted 4 times
A is the answer.
upvoted 1 times
Selected Answer: D
vote D
upvoted 1 times
Selected Answer: D
https://cloud.google.com/blog/products/containers-kubernetes/gke-best-practices-to-lessen-over-provisioning
upvoted 1 times
The development teams in your company want to manage resources from their local environments. You have been asked to enable developer
access to each team’s Google Cloud projects. You want to maximize efficiency while following Google-recommended best practices. What should
you do?
A. Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project ID.
B. Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project Number.
C. Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project
ID.
D. Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project
Number.
Correct Answer: B
Selected Answer: C
This is the most efficient and secure way to enable developer access to Google Cloud projects. By creating groups and assigning roles to the
groups, you can minimize the administrative overhead of managing user permissions. You can also provide developers with access to the projects
they need, while limiting their access to other resources.
upvoted 1 times
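A sketch of the group-based grant; the group, project, and role are hypothetical placeholders:

```
# Grant the team's group a role once; membership changes then need no IAM edits
gcloud projects add-iam-policy-binding my-project \
  --member="group:dev-team@example.com" \
  --role="roles/editor"
```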
Selected Answer: C
I choose C. Adding users to a group and assigning the role to the group is good practice as far as IAM is concerned. The project ID is the more user-
friendly identifier, and the one which most Cloud APIs and user interfaces use when interfacing with you, the customer.
The project number is an internal implementation detail and is the key that most Google Cloud services use for storing data in their databases;
most API calls implicitly translate the ID to the number when performing queries for project details.
upvoted 1 times
Selected Answer: C
C is the answer.
upvoted 1 times
Selected Answer: C
option C
upvoted 1 times
Selected Answer: C
vote C
upvoted 2 times
Selected Answer: C
Your company’s product team has a new requirement based on customer demand to autoscale your stateless and distributed service running in a
Google Kubernetes Engine (GKE) cluster. You want to find a solution that minimizes changes because this feature will go live in two weeks. What
should you do?
A. Deploy a Vertical Pod Autoscaler, and scale based on the CPU load.
C. Deploy a Horizontal Pod Autoscaler, and scale based on the CPU load.
Correct Answer: A
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
Since we need the minimum number of changes, I go with C. Scaling based on custom metrics might take more time compared to the built-in CPU load metric.
Also, note that the application is stateless, so the simple CPU metric is enough as a scaling parameter.
upvoted 1 times
Selected Answer: C
C is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in
response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics
from sources outside of your cluster.
upvoted 2 times
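A minimal sketch of option C using the official Kubernetes Python client, assuming a hypothetical Deployment named "web" (kubectl autoscale or a YAML manifest achieves the same):

from kubernetes import client, config

config.load_kube_config()  # assumes kubectl credentials for the GKE cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=60,  # built-in CPU metric, no custom tuning
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)

Because the target is the built-in CPU metric, no adapter or custom-metrics pipeline has to be deployed, which is why this is the minimal-change option.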
A and B are wrong because it is recommended to start with the HPA if you have nothing in place.
D would take time and effort, since you have to tune the metric.
C is right because it is the simplest entry-level solution for autoscaling, given the unknown new requirements.
upvoted 3 times
I think D is the option.
upvoted 1 times
Selected Answer: C
There are too many typos here, but if it is really a typo, then the answer is C.
upvoted 3 times
Your application is composed of a set of loosely coupled services orchestrated by code executed on Compute Engine. You want your application
to easily bring up new Compute Engine instances that find and use a specific version of a service. How should this be configured?
A. Define your service endpoint information as metadata that is retrieved at runtime and used to connect to the desired service.
B. Define your service endpoint information as label data that is retrieved at runtime and used to connect to the desired service.
C. Define your service endpoint information to be retrieved from an environment variable at runtime and used to connect to the desired
service.
D. Define your service to use a fixed hostname and port to connect to the desired service. Replace the service at the endpoint with your new
version.
Correct Answer: C
Selected Answer: A
The best answer is: A. Define your service endpoint information as metadata that is retrieved at runtime and used to connect to the desired service.
This is the most flexible and scalable way to configure your application to easily bring up new Compute Engine instances that find and use a
specific version of a service.
upvoted 1 times
It is either A or C.
We can define a host URL as metadata in a virtual machine instance that orchestrates different services based on URLs defined as metadata. One
more way is to retrieve the URLs from environment variables. Environment variables can be passed from:
1) the command line
2) a Dockerfile
3) a Kubernetes deployment descriptor
4) a config server - application properties / YAML file, and so on.
The easier way is to define it as metadata in the Compute Engine instance itself.
upvoted 2 times
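A minimal sketch of reading a custom metadata value from inside an instance, assuming a hypothetical attribute key service-endpoint set on the VM:

import requests

# The GCE metadata server; custom attributes live under instance/attributes/.
URL = ("http://metadata.google.internal/computeMetadata/v1"
       "/instance/attributes/service-endpoint")
resp = requests.get(URL, headers={"Metadata-Flavor": "Google"}, timeout=5)
resp.raise_for_status()
endpoint = resp.text  # hypothetical value, e.g. "https://service-v2.internal:8443"

print("Connecting to", endpoint)

New instances created with different metadata automatically pick up their own endpoint at runtime, which is the behavior option A relies on.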
I think B
upvoted 1 times
Selected Answer: B
Answer is [B].
upvoted 1 times
Selected Answer: B
An example of how you can retrieve the endpoint information from a label in Python (the snippet is cut off here; note that the current client library module is compute_v1):
import google.auth
from google.cloud import compute_v1
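A minimal sketch of how the truncated snippet above might continue, assuming hypothetical zone, instance, and label names:

import google.auth
from google.cloud import compute_v1

credentials, project = google.auth.default()
instances = compute_v1.InstancesClient(credentials=credentials)

# labels is a plain key/value map attached to the instance resource.
instance = instances.get(project=project, zone="us-central1-a", instance="my-vm")
endpoint = instance.labels.get("service-endpoint")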
Answer is A:
Labels are used to categorize and organize resources in Google Cloud Platform, such as Compute Engine instances. While they can also be used
to store endpoint information, they may not be as flexible as metadata when it comes to dynamically retrieving information at runtime.
Additionally, labels are associated with individual resources, so updating the label data would require modifying the specific resource, rather
than a centralized metadata store.
In some cases, using labels may be more appropriate, such as when you want to categorize and organize your resources, but for managing
service endpoints in a loosely coupled architecture, metadata is generally a more flexible and scalable solution.
upvoted 4 times
A is the answer.
https://cloud.google.com/service-infrastructure/docs/service-metadata/reference/rest#service-endpoint
upvoted 2 times
You are developing a microservice-based application that will run on Google Kubernetes Engine (GKE). Some of the services need to access
different Google Cloud APIs. How should you set up authentication of these services in the cluster following Google-recommended best practices? (Choose two.)
B. Enable Workload Identity in the cluster via the gcloud command-line tool.
C. Access the Google service account keys from a secret management service.
D. Store the Google service account keys in a central secret management service.
E. Use gcloud to bind the Kubernetes service account and the Google service account using roles/iam.workloadIdentityUser.
Correct Answer: CE
Selected Answer: BE
BE is correct.
upvoted 1 times
Selected Answer: BE
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
upvoted 1 times
A is incorrect. While it could work, all the services are using the same service account, there is no separation of permissions, and no detailed
logging.
B and E together connect GKE and Google service accounts, so GKE can authenticate a service with a Google service account.
C is incorrect. While this is feasible, it’s not the recommended practice for workload identity because of the mandatory key rotation of the service
accounts.
D is incorrect. While this is feasible, it’s not the recommended practice for workload identity because of the mandatory key rotation of the service
accounts.
E and B together connect GKE and Google service accounts, so GKE can authenticate a service with a Google service account.
upvoted 2 times
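A minimal sketch of the step described in option E, assuming hypothetical project, namespace, KSA, and GSA names (gcloud iam service-accounts add-iam-policy-binding is the usual one-liner):

from googleapiclient import discovery

project, namespace, ksa = "my-project", "default", "my-app-ksa"
gsa = f"my-app-gsa@{project}.iam.gserviceaccount.com"

iam = discovery.build("iam", "v1")
resource = f"projects/{project}/serviceAccounts/{gsa}"

# Read-modify-write the service account's IAM policy so the KSA can impersonate it.
policy = iam.projects().serviceAccounts().getIamPolicy(resource=resource).execute()
policy.setdefault("bindings", []).append({
    "role": "roles/iam.workloadIdentityUser",
    "members": [f"serviceAccount:{project}.svc.id.goog[{namespace}/{ksa}]"],
})
iam.projects().serviceAccounts().setIamPolicy(
    resource=resource, body={"policy": policy}).execute()

The Kubernetes service account is then annotated with iam.gke.io/gcp-service-account so pods using it obtain Google credentials without any exported keys.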
https://cloud.google.com/kubernetes-engine/docs/how-to/kubernetes-service-accounts
Answer E
upvoted 2 times
Selected Answer: BE
BE is the answer.
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
upvoted 2 times
Your development team has been tasked with maintaining a .NET legacy application. The application incurs occasional changes and was recently
updated. Your goal is to ensure that the application provides consistent results while moving through the CI/CD pipeline from environment to
environment. You want to minimize the cost of deployment while making sure that external factors and dependencies between hosting
environments are not problematic. Containers are not yet approved in your organization. What should you do?
A. Rewrite the application using .NET Core, and deploy to Cloud Run. Use revisions to separate the environments.
B. Use Cloud Build to deploy the application as a new Compute Engine image for each build. Use this image in each environment.
C. Deploy the application using MS Web Deploy, and make sure to always use the latest, patched MS Windows Server base image in Compute
Engine.
D. Use Cloud Build to package the application, and deploy to a Google Kubernetes Engine cluster. Use namespaces to separate the
environments.
Correct Answer: A
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
The key is that containers are not yet approved. The best possible option is to use Cloud Build to build the application and deploy it onto a virtual machine
instance. Create a snapshot of the disk and create an image out of it, or create an image directly. Use this image in an instance template for the other
environments.
upvoted 1 times
Selected Answer: B
https://cloud.google.com/architecture/modernization-path-dotnet-applications-google-cloud#take_advantage_of_compute_engine
The link above explains why B is better than D.
Answer B
upvoted 2 times
Selected Answer: B
B is the answer.
https://cloud.google.com/architecture/modernization-path-dotnet-applications-google-cloud#phase_1_rehost_in_the_cloud
upvoted 2 times
Selected Answer: B
https://cloud.google.com/architecture/modernization-path-dotnet-applications-google-cloud
upvoted 3 times
The new version of your containerized application has been tested and is ready to deploy to production on Google Kubernetes Engine. You were
not able to fully load-test the new version in pre-production environments, and you need to make sure that it does not have performance problems
once deployed. Your deployment must be automated. What should you do?
A. Use Cloud Load Balancing to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues.
B. Deploy the application via a continuous delivery pipeline using canary deployments. Use Cloud Monitoring to look for performance issues.
C. Deploy the application via a continuous delivery pipeline using blue/green deployments. Use Cloud Monitoring to look for performance
issues.
D. Deploy the application using kubectl and set the spec.updateStrategy.type to RollingUpdate. Use Cloud Monitoring to look for performance
issues, and run the kubectl rollback command if there are any issues.
Correct Answer: A
Selected Answer: B
B. Deploy the application via a continuous delivery pipeline using canary deployments. Use Cloud Monitoring to look for performance issues, and
ramp up traffic as the metrics support it.
upvoted 1 times
I go with B. The key is that the deployment must be automated. With canary deployments and a CI/CD pipeline in place, we can adjust the traffic based on the input from
the canary users.
upvoted 1 times
Selected Answer: B
B. Deploy the application via a continuous delivery pipeline using canary deployments. Use Cloud Monitoring to look for performance issues, and
ramp up traffic as the metrics support it.
Canary deployment strategy can be used to mitigate risk in the production deployment process. In this strategy, a small subset of traffic is routed
to the new version of the application, while the rest of the traffic is sent to the current version. This allows for real-time monitoring of the new
version's performance before fully rolling it out to all users. If there are any issues or performance problems, the traffic can be immediately routed
back to the previous version. Cloud Monitoring can be used to monitor performance metrics and make informed decisions about when to ramp up
traffic to the new version
upvoted 3 times
Selected Answer: D
I'd choose D.
upvoted 1 times
Based on the links provided in the comments, after reviewing them, I can see that
option A "Use Cloud Load Balancing to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues" is a good
approach: using Cloud Load Balancing, traffic is gradually shifted between the versions, and by using Cloud Monitoring you can detect any
performance issues early on.
upvoted 1 times
Option D "Deploy the application using kubectl and set the spec.updateStrategy.type to RollingUpdate. Use Cloud Monitoring to look for
performance issues, and run the kubectl rollback command if there are any issues" is also a good option, in this case as well you are
incrementally rolling out the new version, and monitoring its performance, if any issues occur, you can roll back the update.
upvoted 1 times
Answer D
https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps#overview
Kindly master the requirements of the question, and be very aware of the question's key words
upvoted 2 times
C is the answer.
https://cloud.google.com/architecture/implementing-deployment-and-testing-strategies-on-gke#perform_a_bluegreen_deployment
upvoted 1 times
Users are complaining that your Cloud Run-hosted website responds too slowly during traffic spikes. You want to provide a better user experience. What should you do?
A. Read application configuration and static data from the database on application startup.
B. Package application configuration and static data into the application image during build time.
C. Perform as much work as possible in the background after the response has been returned to the user.
D. Ensure that timeout exceptions and errors cause the Cloud Run instance to exit quickly so a replacement instance can be started.
Correct Answer: C
Selected Answer: B
If images and config info are available in the application image (within the namespace of the hosting system), then latency is lower and resources can be served easily.
upvoted 1 times
Selected Answer: B
B. Package application configuration and static data into the application image during build time.
By packaging application configuration and static data into the application image during build time, the application can quickly serve requests
without having to make additional requests to a database, thus reducing response time. Additionally, you might consider caching static data in the
application to reduce latency and provide faster responses to user requests, also you could move some of the computation that is not time critical
to be done asynchronously.
upvoted 2 times
Option C "Perform as much work as possible in the background after the response has been returned to the user" could be a good approach, it
allows the user to receive a response quickly, but the background work could take a long time and cause a delay in processing and might not
be acceptable for certain use-cases.
Option D "Ensure that timeout exceptions and errors cause the Cloud Run instance to exit quickly so a replacement instance can be started" this
is good practice and can help ensure that when an instance is having problems, it can be quickly replaced with a new one, but this will not
improve the user experience during traffic peaks, but instead it will minimize the impact of a failed instance on the service's availability.
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/blog/topics/developers-practitioners/3-ways-optimize-cloud-run-response-times
Instead of computing things upon startup, compute them lazily. The initialization of global variables always occurs during startup, which increases
cold start time. Use lazy initialization for infrequently used objects to defer the time cost and decrease cold start times.
upvoted 3 times
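A minimal sketch of the lazy-initialization pattern mentioned in the quote above, assuming a hypothetical Firestore dependency:

_db = None

def get_db():
    # Deferred (lazy) initialization: the client is only built on first use,
    # keeping cold-start time short during traffic spikes.
    global _db
    if _db is None:
        from google.cloud import firestore
        _db = firestore.Client()
    return _db

Globals initialized at import time run during every cold start; moving that work into the first request that needs it is one of the documented ways to cut Cloud Run response times.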
Selected Answer: C
Not D - no errors were mentioned; the app is only slowing down when traffic spikes.
C - process in the background.
upvoted 1 times
You are a developer working on an internal application for payroll processing. You are building a component of the application that allows an
• An email is sent to the employee and manager, notifying them that the timesheet was submitted.
These steps are not dependent on each other and can be completed in any order. New steps are being considered and will be implemented by
different development teams. Each development team will implement the error handling specific to their step. What should you do?
A. Deploy a Cloud Function for each step that calls the corresponding downstream system to complete the required action.
B. Create a Pub/Sub topic for each step. Create a subscription for each downstream development team to subscribe to their step's topic.
C. Create a Pub/Sub topic for timesheet submissions. Create a subscription for each downstream development team to subscribe to the
topic.
D. Create a timesheet microservice deployed to Google Kubernetes Engine. The microservice calls each downstream step and waits for a
Correct Answer: A
Selected Answer: C
Pub/Sub is a messaging service that allows you to decouple microservices and other applications. It is a good choice for this use case because it is
scalable, reliable, and easy to use.
upvoted 1 times
This is a tricky question. The context is the development teams developing the application, so C is the best fit. After development, when the
application is running, each timesheet submit event can publish 3 events/messages so that 3 independent microservices, one for each operation,
can kick in in parallel and perform the tasks.
upvoted 1 times
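A minimal sketch of option C's publishing side, assuming hypothetical project, topic, and payload names; each downstream team then attaches its own subscription to the same topic:

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "timesheet-submissions")

# One message per submission; every subscription receives its own copy.
future = publisher.publish(
    topic_path,
    b'{"timesheet_id": "t-123"}',
    employee="e-456",  # attributes let subscribers filter without parsing the body
)
print(future.result())  # message ID once the publish is acknowledged

Because each step has its own subscription, each team implements retries and error handling (e.g. a dead-letter topic) independently, which matches the requirement in the question.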
C is correct
upvoted 1 times
Selected Answer: C
C is the answer.
upvoted 2 times
Selected Answer: C
option c
upvoted 1 times
Selected Answer: C
You are designing an application that uses a microservices architecture. You are planning to deploy the application in the cloud and on-premises.
You want to make sure the application can scale up on demand and also use managed services as much as possible. What should you do?
A. Deploy open source Istio in a multi-cluster deployment on multiple Google Kubernetes Engine (GKE) clusters managed by Anthos.
B. Create a GKE cluster in each environment with Anthos, and use Cloud Run for Anthos to deploy your application to each cluster.
C. Install a GKE cluster in each environment with Anthos, and use Cloud Build to create a Deployment for your application in each cluster.
D. Create a GKE cluster in the cloud and install open-source Kubernetes on-premises. Use an external load balancer service to distribute traffic
Correct Answer: B
Selected Answer: B
Anthos supports GKE cluster creation in both on-premises and GCP environments. Cloud Run for Anthos supports autoscaling in both
environments.
upvoted 1 times
B. Create a GKE cluster in each environment with Anthos, and use Cloud Run for Anthos to deploy your application to each cluster.
Using Anthos to manage Kubernetes clusters in both cloud and on-premises environments allows for consistency in deployment and management
across both environments. Deploying the application using Cloud Run for Anthos allows for easy scaling on demand and use of managed services
such as Cloud SQL and Memorystore. Additionally, Cloud Run for Anthos can be deployed to both GKE clusters and on-premises Kubernetes
clusters, allowing for a consistent deployment experience across environments.
upvoted 2 times
B is the answer.
https://cloud.google.com/anthos/run
Integrated with Anthos, Cloud Run for Anthos provides a flexible serverless development platform for hybrid and multicloud environments. Cloud
Run for Anthos is Google's managed and fully supported Knative offering, an open source project that enables serverless workloads on Kubernetes.
upvoted 1 times
You want to migrate an on-premises container running in Knative to Google Cloud. You need to make sure that the migration doesn't affect your
application's deployment strategy, and you want to use a fully managed service. Which Google Cloud service should you use to deploy your
container?
A. Cloud Run
B. Compute Engine
Correct Answer: A
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
A is the answer.
https://cloud.google.com/blog/products/serverless/knative-based-cloud-run-services-are-ga
upvoted 2 times
Cloud run
upvoted 1 times
This architectural diagram depicts a system that streams data from thousands of devices. You want to ingest data into a pipeline, store the data,
and analyze the data using SQL statements. Which Google Cloud services should you use for steps 1, 2, 3, and 4?
A. 1. App Engine
2. Pub/Sub
3. BigQuery
4. Firestore
B. 1. Dataflow
2. Pub/Sub
3. Firestore
4. BigQuery
C. 1. Pub/Sub
2. Dataflow
3. BigQuery
4. Firestore
D. 1. Pub/Sub
2. Dataflow
3. Firestore
4. BigQuery
Correct Answer: D
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
D is the answer.
upvoted 1 times
Selected Answer: D
option D
upvoted 1 times
Selected Answer: D
Your company just experienced a Google Kubernetes Engine (GKE) API outage due to a zone failure. You want to deploy a highly available GKE
architecture that minimizes service interruption to users in the event of a future zone failure. What should you do?
Correct Answer: A
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#regional_clusters
A regional cluster has multiple replicas of the control plane, running in multiple zones within a given region. Nodes in a regional cluster can run in
multiple zones or a single zone depending on the configured node locations. By default, GKE replicates each node pool across three zones of the
control plane's region. When you create a cluster or when you add a new node pool, you can change the default configuration by specifying the
zone(s) in which the cluster's nodes run. All zones must be within the same region as the control plane.
upvoted 4 times
Selected Answer: B
regional cluster
upvoted 1 times
Selected Answer: B
Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each
message must be processed exactly once to avoid duplication of data and any data conflicts. You need to use the cheapest and most simple solution. What should you do?
A. Process the messages with a Dataproc job, and write the output to storage.
B. Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.
C. Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data.
D. Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to
deduplicate messages.
Correct Answer: B
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
https://cloud.google.com/blog/products/data-analytics/handling-duplicate-data-in-streaming-pipeline-using-pubsub-dataflow
"...because Pub/Sub provides each message with a unique message_id, Dataflow uses it to deduplicate messages by default if you use the built-in
Apache Beam PubSubIO"
upvoted 3 times
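A minimal sketch of option B, assuming hypothetical project, subscription, bucket, and table names, and assuming the destination BigQuery table already exists; Dataflow deduplicates on the Pub/Sub message ID by default, as the quote above notes:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

opts = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=opts) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(
           subscription="projects/my-project/subscriptions/my-sub")
     | "Decode" >> beam.Map(lambda b: {"payload": b.decode("utf-8")})
     | "Write" >> beam.io.WriteToBigQuery(
           "my-project:events.messages",
           create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER))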
Selected Answer: B
https://cloud.google.com/pubsub/docs/stream-messages-dataflow
https://cloud.google.com/community/tutorials/pubsub-spring-dedup-messages
https://cloud.google.com/blog/products/data-analytics/handling-duplicate-data-in-streaming-pipeline-using-pubsub-dataflow
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/dataflow/docs/concepts/streaming-with-cloud-pubsub
upvoted 1 times
You are running a containerized application on Google Kubernetes Engine. Your container images are stored in Container Registry. Your team uses
CI/CD practices. You need to prevent the deployment of containers with known critical vulnerabilities. What should you do?
A.
• Review your application logs for scan results, and provide an attestation that the container is free of known critical vulnerabilities
• Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed
B.
• Review the scan results in the scan details page in the Cloud Console, and provide an attestation that the container is free of known critical
vulnerabilities
• Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed
C.
• Review vulnerability reporting in Container Registry in the Cloud Console, and provide an attestation that the container is free of known
critical vulnerabilities
• Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed
D.
• Programmatically review vulnerability reporting through the Container Scanning API, and provide an attestation that the container is free of
known critical vulnerabilities
• Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed
Correct Answer: C
Selected Answer: C
I think C is correct.
upvoted 2 times
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Answer is D; use the default tools provided by Google, like Container Analysis.
upvoted 1 times
D is the answer.
https://cloud.google.com/binary-authorization/docs/creating-attestations-kritis
upvoted 2 times
You have an on-premises application that authenticates to the Cloud Storage API using a user-managed service account with a user-managed key.
The application connects to Cloud Storage using Private Google Access over a Dedicated Interconnect link. You discover that requests from the
application to access objects in the Cloud Storage bucket are failing with a 403 Permission Denied error code. What is the likely cause of this
issue?
A. The folder structure inside the bucket and object paths have changed.
B. The permissions of the service account’s predefined role have changed.
C. The service account key has been rotated but not updated on the application server.
D. The Interconnect link from the on-premises data center to Google Cloud is experiencing a temporary outage.
Correct Answer: C
Selected Answer: B
The correct option is B. The 403 Permission Denied error code indicates that the service account is authenticated, but it doesn't have sufficient
permissions to access the Cloud Storage bucket. If the error code were 401 Unauthorized, it would suggest that the authentication failed, which
could be caused by a rotated key, as in option C. However, in this case, the error code is 403, which indicates a problem with the permissions of the
service account, making option B the most likely cause.
upvoted 6 times
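A minimal sketch showing how the two failure modes surface in the Python client, assuming hypothetical bucket and object names:

from google.api_core import exceptions
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("data/object.txt")

try:
    data = blob.download_as_bytes()
except exceptions.Unauthorized:
    # 401: the credentials themselves were rejected (e.g. a stale or deleted key).
    raise
except exceptions.Forbidden:
    # 403: authenticated fine, but the caller lacks permission on the object.
    raise

This is the distinction the comment above draws: a key that no longer authenticates tends to surface as a 401-style credential failure, while 403 means IAM said no.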
Selected Answer: B
B is the correct answer. 403 denotes that the user is authenticated but not authorized.
upvoted 1 times
C is correct
upvoted 1 times
The client ID / service account key has been updated for the storage bucket, but the client applications or the application server that calls the
Cloud Storage bucket were not notified.
upvoted 1 times
Selected Answer: C
A user-managed service account authenticates to the Cloud Storage API using a key, which is a unique identifier that proves the identity of the
service account. If the key is rotated, meaning it is replaced with a new one, the application will no longer be able to authenticate using the old key,
resulting in a 403 Permission Denied error. To resolve this issue, the application server must be updated with the new key.
upvoted 2 times
Selected Answer: B
Answer B: with status code 403 => Forbidden, authentication is working; the service just does not have enough permission to access the
document.
upvoted 1 times
I will choose C because the question's context is a service account using a key file. With this setup, the cause of the 403 issue will be that the key is no
longer valid after a rotation. In another context, with only a service account and no generated key, B would be the first check; but with a key, you
need to check that the key is valid before searching for other causes.
upvoted 1 times
The reason for the denied access is the reason we get a 403. Do not just copy what others are saying; do the research and apply your
knowledge to this if you have any practical knowledge. The answer is B.
upvoted 1 times
C. The service account key has been rotated but not updated on the application server.
When a user-managed service account key is rotated in Google Cloud, the new key must also be updated on the application server that
authenticates to the Cloud Storage API using that key. Failure to update the key on the application server will result in requests to the API failing
with a 403 Permission Denied error code.
Option B "The permissions of the service account’s predefined role have changed" would also result in 403 error, but it would be a role issue, not a
key issue.
upvoted 1 times
Selected Answer: B
Answer B
https://cloud.google.com/storage/docs/troubleshooting#access-permission
https://cloud.google.com/appengine/docs/legacy/standard/python/googlecloudstorageclient/errors
https://cloud.google.com/storage/docs/xml-api/reference-status#403%E2%80%94forbidden
upvoted 1 times
You're correct, the 403 "Permission Denied" error can have various causes, such as an issue with the folder structure inside the bucket
or an issue with the predefined role permissions, but based on the context and the error message, it seems that the most likely cause is the
service account key being rotated and not updated on the application server, as I mentioned earlier.
Additionally, the links you provided give more information about the possible causes of a 403 error, such as the permissions that are
associated with the object and the bucket, user authentication, and role-based access control. Also, it's important to check the Cloud Storage
access logs to determine the cause of the error and take appropriate action.
upvoted 1 times
Selected Answer: C
C is the answer.
upvoted 1 times
You are using the Cloud Client Library to upload an image in your application to Cloud Storage. Users of the application report that occasionally
the upload does not complete and the client library reports an HTTP 504 Gateway Timeout error. You want to make the application more resilient
B. Write a one-second wait time backoff process around the client library call.
C. Design a retry button in the application and ask users to click if the error occurs.
D. Create a queue for the object and inform the users that the application will try again in 10 minutes.
Correct Answer: A
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
An exponential backoff strategy is the better choice for a retry approach. This is for resiliency.
upvoted 1 times
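A minimal sketch of option A with the google-cloud-storage client, assuming hypothetical bucket and object names; recent client versions accept a retry argument on upload calls:

from google.api_core.retry import Retry
from google.cloud import storage

# Exponential backoff: 1 s initial delay, doubling up to 32 s, ~2 minutes overall.
backoff = Retry(initial=1.0, multiplier=2.0, maximum=32.0, deadline=120.0)

client = storage.Client()
blob = client.bucket("my-bucket").blob("images/photo.jpg")
blob.upload_from_filename("photo.jpg", retry=backoff)

Transient errors such as 504 are retried with increasing delays instead of hammering the service, which is exactly what the one-second fixed wait in option B fails to do.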
Selected Answer: A
When the server returns errors like 504, use exponential backoff.
upvoted 1 times
https://cloud.google.com/storage/docs/retry-strategy#exponential-backoff
Answer A
upvoted 1 times
A is the answer.
upvoted 1 times
You are building a mobile application that will store hierarchical data structures in a database. The application will enable users working offline to
sync changes when they are back online. A backend service will enrich the data in the database using a service account. The application is
expected to be very popular and needs to scale seamlessly and securely. Which database and IAM role should you use?
A. Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account.
B. Use Bigtable, and assign the roles/bigtable.viewer role to the service account.
C. Use Firestore in Native mode and assign the roles/datastore.user role to the service account.
D. Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the service account.
Correct Answer: A
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
The IAM role should be the roles/datastore.user role, not a viewer role as in option D. Firestore is suitable for storing semi-structured and hierarchical
mobile data.
upvoted 1 times
C. Use Firestore in Native mode and assign the roles/datastore.user role to the service account.
The roles/datastore.user role has permissions for read/write access to data in a Datastore mode database. It is intended for application developers and
service accounts.
https://cloud.google.com/datastore/docs/access/iam
upvoted 1 times
C is the answer.
https://firebase.google.com/docs/firestore/manage-data/enable-offline
Cloud Firestore supports offline data persistence. This feature caches a copy of the Cloud Firestore data that your app is actively using, so your app
can access the data when the device is offline. You can write, read, listen to, and query the cached data. When the device comes back online, Cloud
Firestore synchronizes any local changes made by your app to the Cloud Firestore backend.
upvoted 2 times
option C
upvoted 1 times
Selected Answer: C
https://firebase.google.com/docs/firestore/manage-data/enable-offline
upvoted 1 times
Your application is deployed on hundreds of Compute Engine instances in a managed instance group (MIG) in multiple zones. You need to deploy
a new instance template to fix a critical vulnerability immediately but must avoid impact to your service. What setting should be made to the MIG
Correct Answer: C
Selected Answer: D
Not B, because the MIG needs to be updated immediately, which is not what Opportunistic does.
Not C, because max unavailable at 100% will cause downtime.
If you choose A, the MIG will spin up hundreds of new machines to replace the existing ones and shut down the old ones. This is the fastest
method, but it could be costly, or you could run into quota issues.
If you choose D, the MIG will spin up 3 VMs at a time (maxSurge defaults to 3), and then bring up one at a time as soon as more surge slots
are available, so it won't really be that fast.
Selected Answer: D
I vote D.
There are 2 requirements: to deploy the new instance template immediately and to avoid impact. Option D matches the urgency of the issue well,
but it also allows you to control (minimize) the level of disruption to the service.
https://cloud.google.com/compute/docs/instance-groups/updating-migs#choosing_between_automated_and_selective_updates
upvoted 1 times
Selected Answer: B
When you set the update mode to Opportunistic, the group will continue to serve requests from existing instances while the new instances are
being created and started. Once the new instances are ready, the group will start routing requests to them. The group will continue to serve
requests from both the old and new instances until all of the old instances have been terminated.
upvoted 2 times
The key here is fixing the vulnerability immediately which is not possible with Opportunistic mode.
upvoted 1 times
Selected Answer: D
I think updating "deployed on hundreds of Compute Engine instances" is impossible with Opportunistic mode.
So I agree with D.
upvoted 1 times
Selected Answer: D
I go for D:
you've updated the template and want to apply the changes immediately.
upvoted 1 times
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#minimum_wait_time
Use the minReadySec option to specify the amount of time to wait before considering a new or restarted instance as updated. Use this option to
control the rate at which the automated update is deployed. The timer starts when both of the following conditions are satisfied:
However: minReadySec is only available in the beta Compute Engine API and might be deprecated in a future release.
upvoted 1 times
Selected Answer: D
Setting the "Minimum Wait time" to 0 seconds means that there is no delay in launching the new instances after the instance template is updated,
allowing you to deploy the fix for the critical vulnerability immediately. On the other hand, setting the "Update mode to Opportunistic" would
mean that the new instances are created at an opportune time and may result in a delay in deploying the fix. In this scenario, where a critical
vulnerability needs to be fixed immediately, it's important to deploy the fix as soon as possible, making the "Minimum Wait time to 0 seconds" the
better approach.
upvoted 2 times
https://cloud.google.com/compute/docs/instance-groups/updating-migs#opportunistic_updates
Answer B
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#type
Alternatively, if an automated update is potentially too disruptive, you can choose to perform an opportunistic update. The MIG applies an
opportunistic update only when you manually initiate the update on selected instances or when new instances are created. New instances can be
created when you or another service, such as an autoscaler, resizes the MIG. Compute Engine does not actively initiate requests to apply
opportunistic updates on existing instances.
upvoted 4 times
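A minimal sketch of setting these update-policy knobs programmatically, assuming a hypothetical zonal MIG and template (a regional MIG would use RegionInstanceGroupManagersClient instead); the field values are illustrative, not the exam's answer:

from google.cloud import compute_v1

client = compute_v1.InstanceGroupManagersClient()
mig = compute_v1.InstanceGroupManager(
    instance_template="global/instanceTemplates/patched-template",
    update_policy=compute_v1.InstanceGroupManagerUpdatePolicy(
        type_="PROACTIVE",  # roll out immediately; "OPPORTUNISTIC" waits
        max_unavailable=compute_v1.FixedOrPercent(fixed=0),  # never reduce capacity
        max_surge=compute_v1.FixedOrPercent(fixed=3),        # add VMs before removals
    ),
)
client.patch(project="my-project", zone="us-central1-a",
             instance_group_manager="my-mig",
             instance_group_manager_resource=mig)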
You made a typo in a low-level Linux configuration file that prevents your Compute Engine instance from booting to a normal run level. You just
created the Compute Engine instance today and have done no other maintenance on it, other than tweaking files. How should you correct this
error?
A. Download the file using scp, change the file, and then upload the modified version
B. Configure and log in to the Compute Engine instance through SSH, and change the file
C. Configure and log in to the Compute Engine instance through the serial port, and change the file
D. Configure and log in to the Compute Engine instance using a remote desktop client, and change the file
Correct Answer: B
Selected Answer: C
If booting issue with Compute engine instance, then serial port access is one of the solution. SSH, RDP and SCP are not possible.
upvoted 2 times
Selected Answer: C
According to the question, the typo "prevents your Compute Engine instance from booting to a normal run level".
So I think the sshd daemon has not launched yet and you can't use SSH.
I can't think of a correct answer other than C.
upvoted 1 times
Selected Answer: B
The correct answer is B: Configure and log in to the Compute Engine instance through SSH, and change the file.
This is the recommended method to make changes to a Linux configuration file on a Compute Engine instance. SSH allows secure remote access to
the instance's command line interface, and it is designed to enable you to make changes to the instance's configuration files.
Option A, downloading and uploading the modified version of the file, is not the recommended method as it requires more steps and can
introduce errors.
Option C, using the serial port, may be used in some cases, but it is not the recommended method as it requires more steps and can be more
complex.
Option D, using a remote desktop client, is not applicable as Linux instances on Compute Engine do not come with a graphical user interface (GUI)
by default.
upvoted 1 times
C is the answer.
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-using-serial-console
upvoted 1 times
You are developing an application that needs to store files belonging to users in Cloud Storage. You want each user to have their own subdirectory
in Cloud Storage. When a new user is created, the corresponding empty subdirectory should also be created. What should you do?
A. Create an object with the name of the subdirectory ending with a trailing slash ('/') that is zero bytes in length.
B. Create an object with the name of the subdirectory, and then immediately delete the object within that subdirectory.
C. Create an object with the name of the subdirectory that is zero bytes in length and has WRITER access control list permission.
D. Create an object with the name of the subdirectory that is zero bytes in length. Set the Content-Type metadata to CLOUDSTORAGE_FOLDER.
Correct Answer: A
Selected Answer: A
A is the answer.
https://cloud.google.com/storage/docs/folders
If you create an empty folder using the Google Cloud console, Cloud Storage creates a zero-byte object as a placeholder. For example, if you create
a folder called folder in a bucket called my-bucket, a zero-byte object called gs://my-bucket/folder/ is created. This placeholder is discoverable by
other tools when listing the objects in the bucket, for example when using the gsutil ls command.
upvoted 6 times
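A minimal sketch of option A, assuming a hypothetical bucket name:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-app-user-files")

def create_user_folder(user_id: str) -> None:
    # A zero-byte object whose name ends in "/" is the placeholder that Cloud
    # Storage (and tools like gsutil ls) treat as an empty folder.
    bucket.blob(f"{user_id}/").upload_from_string(b"")

create_user_folder("user-123")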
Selected Answer: A
When you create an object with the name of the subdirectory ending with a trailing slash, Cloud Storage will treat the object as a subdirectory. This
means that you can then store other objects in the subdirectory.
upvoted 1 times
I go with C. WRITER permission and a zero-byte folder object is correct.
upvoted 1 times
Selected Answer: C
https://cloud.google.com/storage/docs/folders#overview
According to the explanation at the URL above, "Cloud Storage operates with a flat namespace, which means that folders don't actually exist within
Cloud Storage."
That's right; you can't create the state "foo/".
This is from actual experience using Cloud Storage.
Therefore, I think the correct answer is C.
upvoted 1 times
Your company’s corporate policy states that there must be a copyright comment at the very beginning of all source files. You want to write a
custom step in Cloud Build that is triggered by each source commit. You need the trigger to validate that the source contains a copyright and add
one for subsequent steps if not there. What should you do?
A. Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed
files are explicitly committed back to the source repository.
B. Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed
C. Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file.
D. Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file.
Correct Answer: C
Selected Answer: A
If a company policy states that every source file should have the copyright comment at the beginning, then for every build we need
to scan each source code file, generate the copyright comment if it is not there, and commit the updated files back to the repository. This is like a
pre-build check.
upvoted 1 times
Selected Answer: A
A. Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files are
explicitly committed back to the source repository.
This option would allow you to create a custom step in Cloud Build that is triggered by each source commit, which would examine the source files
in the /workspace directory, check for the presence of a copyright comment, and add one if not present. By committing the changed files back to
the source repository, you ensure that the updated files with the added copyright comment are properly tracked and stored in the source control
system.
upvoted 1 times
The code changes must be put back in the /workspace folder or the other sub-steps won't have the changes.
upvoted 1 times
Selected Answer: A
A is the answer.
https://cloud.google.com/build/docs/configuring-builds/pass-data-between-steps#passing_data_using_workspaces
To pass data between build steps, store the assets produced by the build step in /workspace and these assets will be available to any subsequent
build steps.
upvoted 3 times
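A minimal sketch of what the container's entry point could look like, assuming a hypothetical copyright line and Python sources; the real step would also need to git-commit the changes, per option A:

#!/usr/bin/env python3
import pathlib

COPYRIGHT = "# Copyright 2024 Example Corp. All rights reserved.\n"

# Cloud Build mounts the source checkout at /workspace for every step.
for path in pathlib.Path("/workspace").rglob("*.py"):
    text = path.read_text()
    if not text.startswith(COPYRIGHT.strip()):
        path.write_text(COPYRIGHT + text)
        print(f"added copyright to {path}")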
Selected Answer: B
B is the answer.
https://cloud.google.com/build/docs/configuring-builds/pass-data-between-steps#passing_data_using_workspaces
To pass data between build steps, store the assets produced by the build step in /workspace and these assets will be available to any subsequent
build steps.
upvoted 1 times
One of your deployed applications in Google Kubernetes Engine (GKE) is having intermittent performance issues. Your team uses a third-party
logging solution. You want to install this solution on each node in your GKE cluster so you can view the logs. What should you do?
C. Use SSH to connect to the GKE node, and install the software manually
D. Deploy the third-party solution using Terraform and deploy the logging Pod as a Kubernetes Deployment
Correct Answer: A
Selected Answer: A
A is more suitable than D here. D is complicated, with Terraform and so on. Another solution would be to deploy the third-party logging solution as a
sidecar container with the main application.
upvoted 1 times
Selected Answer: A
A is the answer; it's the use of a DaemonSet to ensure that a specific Pod is always running on all, or some subset of, the nodes.
upvoted 1 times
A is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user
intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluent-bit, and node monitoring daemons like
collectd.
upvoted 2 times
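A minimal sketch of option A with the Kubernetes Python client, assuming a hypothetical agent image (a plain YAML manifest applied with kubectl is equivalent):

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run in-cluster

labels = {"app": "log-agent"}
daemonset = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="log-agent", namespace="logging"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="agent",
                                   image="example.com/log-agent:1.0"),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_daemon_set(namespace="logging", body=daemonset)

The scheduler then keeps exactly one agent pod on every node, including nodes added later by the autoscaler.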
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However,
there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might
contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to
the next section of the exam. After you begin a new section, you cannot return to this section.
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study
before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem
statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the
subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and
organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in
Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid
growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10,000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to
hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides
clear uptime data, and that they analyze and respond to any issues that occur.
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands
their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
• Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
• Applications are manually deployed by infrastructure engineers during periods of slow traffic on weekday evenings.
• There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
• Ensure a consistent experience for users when they travel to different regions.
• Obtain user activity metrics to better understand how to monetize their product.
• Ensure compliance with regulations in the new regions (for example, GDPR).
Technical Requirements -
• Provide secure communications between the on-premises data center and cloud-hosted applications and infrastructure.
• Logging and performance metrics must provide actionable information to be able to provide debugging information and alerts.
How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users?
A. Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node.
B. Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute
the load between instances. Use managed instance groups for scaling.
C. Use Memorystore to store session information and CloudSQL to store state information. Use a Google Cloud-managed load balancer to
distribute the load between instances. Use managed instance groups for scaling.
D. Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state
information.
Correct Answer: D
Selected Answer: C
C is correct.
upvoted 2 times
Selected Answer: C
C is the answer.
upvoted 1 times
C is the answer.
upvoted 1 times
Selected Answer: C
Not A, because running MySQL inside GKE is not a GCP best practice (there is Cloud SQL).
Not B: running MySQL manually on Compute Engine instances is not best practice (there is Cloud SQL).
Not D: state information does not belong in Cloud Storage.
Case study: HipLocal (same case study details as above).
How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets
feature requirements?
A. Include unit tests in their code, and prevent deployments to QA until all tests have a passing status.
B. Include performance tests in their code, and prevent deployments to QA until all tests have a passing status.
C. Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy.
D. Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.
Correct Answer: B
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
The answer must be A. The dev team's responsibility is to make sure all unit tests pass before declaring code ready for testing.
Note: GCP PCD questions are really ridiculous. There are many ways to ask this question in a much simpler way. I guess the questions are being prepared by non-technical staff at GCP.
upvoted 2 times
Selected Answer: A
Answer A
upvoted 1 times
A is the answer.
upvoted 1 times
A stable environment is one that works. Performance testing does not prove that the code works correctly; unit testing does.
upvoted 2 times
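A sketch of how such a gate is commonly wired into a CI script: the deploy step runs only if the test step exits successfully. The build tool and the deploy script are hypothetical placeholders; any CI system can enforce the same rule.

  # Deploy to the QA environment only when the unit tests pass
  mvn test && ./deploy-to-qa.sh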
Case study: HipLocal (same case study details as above).
HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in
the Cloud Client Libraries to implement least privileged access for the application. What should they do?
A. Create an API key. Use the API key to interact with Google Cloud.
B. Use the default compute service account to interact with Google Cloud.
C. Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with
Google Cloud.
D. Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys
used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.
Correct Answer: A
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
B is easily eliminated.
A is not as secure: an API key provides only authorization, not authentication, and IAM is not involved.
D is more complex; it is not necessary to create a service account for every API used by the application.
upvoted 1 times
Selected Answer: C
Answer C
upvoted 1 times
Answer C
upvoted 1 times
C is the answer.
upvoted 1 times
Still, ideally you should not copy SA keys around. Most of the time, GCP gives you a way to associate a service account with a workload.
upvoted 3 times
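Tying these comments together, a minimal sketch of option C with least privilege; the project, account, and role below are hypothetical placeholders. As the last commenter notes, prefer attaching the service account directly to the workload, and export a key only when that is impossible.

  # Create a dedicated service account for the application
  gcloud iam service-accounts create hiplocal-app --display-name="HipLocal application"
  # Grant only the role the application actually needs (role shown is illustrative)
  gcloud projects add-iam-policy-binding my-project \
      --member="serviceAccount:hiplocal-app@my-project.iam.gserviceaccount.com" \
      --role="roles/storage.objectViewer"
  # Export a key only if the workload cannot have the service account attached
  gcloud iam service-accounts keys create key.json \
      --iam-account=hiplocal-app@my-project.iam.gserviceaccount.com
  export GOOGLE_APPLICATION_CREDENTIALS="$PWD/key.json"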
You are in the final stage of migrating an on-premises data center to Google Cloud. You are quickly approaching your deadline, and discover that a
web API is running on a server slated for decommissioning. You need to recommend a solution to modernize this API while migrating to Google
Cloud. The modernized web API must meet the following requirements:
• Developers must be able to rapidly deploy new versions in response to frequent code changes
You want to minimize cost, effort, and operational overhead of this migration. What should you do?
D. Ask the development team to re-write the application to run as a Docker container on Google Kubernetes Engine.
Correct Answer: C
Selected Answer: A
App Engine flexible environment is a fully managed platform for running Python 3.x applications. It autoscales during high-traffic periods and can be deployed rapidly using the App Engine SDK or the gcloud command-line tool. Additionally, the App Engine flexible environment is cost-effective, as you only pay for the resources that you use.
upvoted 1 times
GAE standard is the better choice, as it already supports Python 3.x and is cheaper. GAE flexible doesn't scale down to 0 and always keeps at least one instance running.
upvoted 1 times
Selected Answer: B
A. App Engine flexible cannot scale down to 0, so it does not minimize cost
C. Deploying to a single VM will not allow autoscaling
D. Running in a GKE cluster will not minimize cost
Selected Answer: B
B is the answer.
https://cloud.google.com/appengine/docs/standard
upvoted 4 times
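For illustration, a minimal sketch of the App Engine standard flow that option B implies; the region and runtime are hypothetical placeholders.

  # One-time App Engine initialization for the project
  gcloud app create --region=us-central
  # Deploy the web API; app.yaml declares the runtime (e.g. runtime: python312)
  gcloud app deploy app.yaml --quiet

App Engine standard scales instances down to zero when the API receives no traffic, which is what makes it the cost-minimizing choice here.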
You are developing an application that consists of several microservices running in a Google Kubernetes Engine cluster. One microservice needs
to connect to a third-party database running on-premises. You need to store credentials to the database and ensure that these credentials can be
rotated while following security best practices. What should you do?
A. Store the credentials in a sidecar container proxy, and use it to connect to the third-party database.
B. Configure a service mesh to allow or restrict traffic from the Pods in your microservice to the database.
C. Store the credentials in an encrypted volume mount, and associate a Persistent Volume Claim with the client Pod.
D. Store the credentials as a Kubernetes Secret, and use the Cloud Key Management Service plugin to handle encryption and decryption.
Correct Answer: A
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Storing the credentials as a Kubernetes Secret, plus Cloud KMS for encryption and decryption of the DB credentials, is the best answer.
upvoted 1 times
Selected Answer: D
Storing sensitive information such as database credentials in Kubernetes Secrets is a common and secure way to manage sensitive information in a
cluster. The Cloud Key Management Service (KMS) can be used to further protect the secrets by encrypting and decrypting them, ensuring that
they are protected both at rest and in transit. This combination of Kubernetes Secrets and Cloud KMS provides a secure way to manage and rotate
credentials while following security best practices.
Options A and B are not recommended, as they do not provide a secure and centralized way to manage and rotate credentials. Option C is not
recommended because storing secrets in an encrypted volume mount is not as secure as using a Key Management Service, as the encryption keys
must still be managed and protected within the cluster.
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets
By default, Google Kubernetes Engine (GKE) encrypts customer content stored at rest, including Secrets. GKE handles and manages this default
encryption for you without any additional action on your part.
Application-layer secrets encryption provides an additional layer of security for sensitive data, such as Secrets, stored in etcd. Using this
functionality, you can use a key managed with Cloud KMS to encrypt data at the application layer. This encryption protects against attackers who
gain access to an offline copy of etcd.
upvoted 3 times
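A minimal sketch of option D, assuming hypothetical cluster, key ring, key, and secret names.

  # Create a GKE cluster with application-layer secrets encryption backed by Cloud KMS
  gcloud container clusters create secure-cluster --zone=us-central1-a \
      --database-encryption-key=projects/my-project/locations/us-central1/keyRings/gke-ring/cryptoKeys/secrets-key
  # Store the database credentials as a Kubernetes Secret; rotation is a matter of
  # updating the Secret, with no change to the application image
  kubectl create secret generic db-creds \
      --from-literal=username=app_user --from-literal=password='example-password'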
You manage your company's ecommerce platform's payment system, which runs on Google Cloud. Your company must retain user logs for 1 year
for internal auditing purposes and for 3 years to meet compliance requirements. You need to store new user logs on Google Cloud to minimize on-premises storage usage and ensure that they are easily searchable. You want to minimize effort while ensuring that the logs are stored correctly.
A. Store the logs in a Cloud Storage bucket with bucket lock turned on.
B. Store the logs in a Cloud Storage bucket with a 3-year retention period.
C. Store the logs in Cloud Logging as custom logs with a custom retention period.
D. Store the logs in a Cloud Storage bucket with a 1-year retention period. After 1 year, move the logs to another bucket with a 2-year retention
period.
Correct Answer: C
Selected Answer: C
The requirements say that the logs should be easily searchable. This is not easily achieved in Cloud Storage, which eliminates A, B, and D.
Note, that it's possible to configure Cloud Logging with a custom retention period.
https://cloud.google.com/logging/docs/buckets#custom-retention
upvoted 8 times
Selected Answer: C
C is correct.
upvoted 1 times
Tricky question.
"Easily searchable" is the key here.
Cloud Logging supports retaining logs for between 1 and 3650 days (10 years max), and a custom retention period can be set on log buckets.
upvoted 1 times
C is the correct answer because Cloud Logging can retain logs for between 1 day and 3650 days.
upvoted 1 times
https://cloud.google.com/logging/docs/routing/overview#logs-retention
Cloud Logging retains logs according to retention rules applying to the log bucket type where the logs are held.
You can configure Cloud Logging to retain logs between 1 day and 3650 days. Custom retention rules apply to all the logs in a bucket, regardless
of the log type or whether that log has been copied from another location.
upvoted 2 times
Selected Answer: B
B is the answer.
upvoted 1 times
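As the linked documentation describes, retention is set on the log bucket. A minimal sketch, assuming the default bucket and a 3-year (1095-day) retention:

  # Logs stay in Cloud Logging (searchable via Logs Explorer) for 3 years
  gcloud logging buckets update _Default --location=global --retention-days=1095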
Your company has a new security initiative that requires all data stored in Google Cloud to be encrypted by customer-managed encryption keys.
You plan to use Cloud Key Management Service (KMS) to configure access to the keys. You need to follow the "separation of duties" principle and
C. Provision Cloud KMS in the project where the keys are being used.
D. Grant the roles/cloudkms.admin role to the owner of the project where the keys from Cloud KMS are being used.
E. Grant an owner role for the Cloud KMS project to a different user than the owner of the project where the keys from Cloud KMS are being
used.
Correct Answer: AE
Selected Answer: AB
https://cloud.google.com/kms/docs/separation-of-duties#using_separate_project
Instead, to allow for a separation of duties, you could run Cloud KMS in its own project, for example your-key-project. Then, depending on the
strictness of your separation requirements, you could either:
- (recommended) Create your-key-project without an owner at the project level, and designate an Organization Admin granted at the
organization-level. Unlike an owner, an Organization Admin can't manage or use keys directly. They are restricted to setting IAM policies, which
restrict who can manage and use keys. Using an organization-level node, you can further restrict permissions for projects in your organization.
upvoted 5 times
Selected Answer: AB
AB is correct.
upvoted 1 times
Selected Answer: AE
A. Provision Cloud KMS in its own project - this helps to ensure that the management of encryption keys is isolated and separate from other
projects in your Google Cloud organization.
E. Grant an owner role for the Cloud KMS project to a different user than the owner of the project where the keys from Cloud KMS are being used -
this follows the "separation of duties" principle and helps to ensure that the management of encryption keys is not tied to the project where the
keys are being used.
upvoted 2 times
Selected Answer: AB
Answer A, B
upvoted 2 times
Selected Answer: AB
https://cloud.google.com/kms/docs/separation-of-duties#using_separate_project
(recommended) Create your-key-project without an owner at the project level, and designate an Organization Admin granted at the
organization-level. Unlike an owner, an Organization Admin can't manage or use keys directly. They are restricted to setting IAM policies, which
restrict who can manage and use keys. Using an organization-level node, you can further restrict permissions for projects in your organization.
upvoted 3 times
AE is the answer.
https://cloud.google.com/kms/docs/separation-of-duties#using_separate_project
Cloud KMS could be run in an existing project, for example your-project, and this might be sensible if the data being encrypted with keys in Cloud
KMS is stored in the same project.
However, any user with owner access on that project is then also able to manage (and perform cryptographic operations with) keys in Cloud KMS
in that project. This is because the keys themselves are owned by the project, of which the user is an owner.
Instead, to allow for a separation of duties, you could run Cloud KMS in its own project, for example your-key-project.
upvoted 2 times
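A minimal sketch of the separate-key-project pattern the documentation recommends. All project IDs, resource names, and the service account below are hypothetical placeholders.

  # Run Cloud KMS in its own project, separate from the workloads that use the keys
  gcloud projects create your-key-project --organization=123456789
  # Create the key ring and key inside the key project
  gcloud kms keyrings create app-ring --location=global --project=your-key-project
  gcloud kms keys create app-key --keyring=app-ring --location=global \
      --purpose=encryption --project=your-key-project
  # Grant the workload's service account permission to use the key, nothing more
  gcloud kms keys add-iam-policy-binding app-key --keyring=app-ring --location=global \
      --project=your-key-project \
      --member="serviceAccount:app@your-project.iam.gserviceaccount.com" \
      --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"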
You need to migrate a standalone Java application running in an on-premises Linux virtual machine (VM) to Google Cloud in a cost-effective
manner. You decide not to take the lift-and-shift approach, and instead you plan to modernize the application by converting it to a container. How
A. Use Migrate for Anthos to migrate the VM to your Google Kubernetes Engine (GKE) cluster as a container.
B. Export the VM as a raw disk and import it as an image. Create a Compute Engine instance from the Imported image.
C. Use Migrate for Compute Engine to migrate the VM to a Compute Engine instance, and use Cloud Build to convert it to a container.
D. Use Jib to build a Docker image from your source code, and upload it to Artifact Registry. Deploy the application in a GKE cluster, and test
the application.
Correct Answer: A
Selected Answer: D
Jib is a tool that builds Docker images from Java code without the need for a Dockerfile. This makes it easy to containerize Java applications, even if
you don't have any experience with Docker.
upvoted 1 times
Selected Answer: D
Answer D
https://cloud.google.com/blog/products/application-development/introducing-jib-build-java-docker-images-better
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/blog/products/application-development/introducing-jib-build-java-docker-images-better
upvoted 1 times
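For illustration, the Jib workflow that option D describes, assuming a hypothetical Artifact Registry repository and deployment name:

  # Build and push a container image straight from Java source; no Dockerfile required
  mvn compile jib:build -Dimage=us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1
  # Run the image on an existing GKE cluster for testing
  kubectl create deployment my-app \
      --image=us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1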
Your organization has recently begun an initiative to replatform their legacy applications onto Google Kubernetes Engine. You need to decompose
a monolithic application into microservices. Multiple instances have read and write access to a configuration file, which is stored on a shared file
system. You want to minimize the effort required to manage this transition, and you want to avoid rewriting the application code. What should you
do?
A. Create a new Cloud Storage bucket, and mount it via FUSE in the container.
B. Create a new persistent disk, and mount the volume as a shared PersistentVolume.
C. Create a new Filestore instance, and mount the volume as an NFS PersistentVolume.
D. Create a new ConfigMap and volumeMount to store the contents of the configuration file.
Correct Answer: A
Selected Answer: C
A is incorrect, because Cloud Storage FUSE does not support concurrency and file locking.
B is incorrect, because a persistent disk PersistentVolume is not read-write-many. It can only be read-write once or read-many.
C is correct, because it’s the only managed, supported read-write-many storage option available for file-system access in Google Kubernetes
Engine.
D is incorrect, because the ConfigMap cannot be written to from the Pods.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
https://cloud.google.com/filestore/docs/accessing-fileshares
https://cloud.google.com/storage/docs/gcs-fuse
upvoted 7 times
Selected Answer: D
ConfigMaps are Kubernetes objects that allow you to store configuration data in a key-value format. ConfigMaps can be marked immutable, meaning they cannot be changed after creation. This makes them well suited to configuration data that needs to be shared between multiple Pods.
upvoted 1 times
Selected Answer: D
A ConfigMap is the usual way to store configuration for applications that run in the cluster. I do not understand why many of you are saying C. The question is asking how we are going to manage configuration data in a GKE environment.
upvoted 1 times
Selected Answer: C
https://kubernetes.io/docs/concepts/storage/volumes/#nfs
An nfs volume allows an existing NFS (Network File System) share to be mounted into a Pod. Unlike emptyDir, which is erased when a Pod is
removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated
with data, and that data can be shared between pods. NFS can be mounted by multiple writers simultaneously.
upvoted 1 times
Selected Answer: C
Answer C
An nfs volume allows an existing NFS (Network File System) share to be mounted into a Pod. Unlike emptyDir, which is erased when a Pod is
removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated
with data, and that data can be shared between pods
upvoted 2 times
Selected Answer: C
Generally, ConfigMaps (D) are the right choice for storing Pod config files, but they are read-only from the Pod's perspective, which does not match what is asked for here. If, as stated in the question, the application components also need to write to that config file on a shared file system, the only valid choice is an NFS PersistentVolume.
upvoted 2 times
Selected Answer: D
D is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/configmap
ConfigMaps bind non-sensitive configuration artifacts such as configuration files, command-line arguments, and environment variables to your Pod
containers and system components at runtime.
A ConfigMap separates your configurations from your Pod and components, which helps keep your workloads portable. This makes their
configurations easier to change and manage, and prevents hardcoding configuration data to Pod specifications.
upvoted 2 times
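For readers who want to see the shape of option C, a minimal sketch with hypothetical names. The Filestore IP returned by the second command is what an NFS PersistentVolume spec would reference.

  # Create a Filestore instance to host the shared configuration file
  gcloud filestore instances create config-share --zone=us-central1-a --tier=BASIC_HDD \
      --file-share=name=share1,capacity=1TB --network=name=default
  # Look up the instance IP for use in the NFS PersistentVolume
  gcloud filestore instances describe config-share --zone=us-central1-a \
      --format="value(networks[0].ipAddresses[0])"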
Your development team has built several Cloud Functions using Java along with corresponding integration and service tests. You are building and
deploying the functions and launching the tests using Cloud Build. Your Cloud Build job is reporting deployment failures immediately after
B. Verify that your Cloud Build trigger has the correct build parameters.
C. Retry the tests using the truncated exponential backoff polling strategy.
D. Verify that the Cloud Build service account is assigned the Cloud Functions Developer role.
Correct Answer: C
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
The Cloud Build service account must have the Cloud Functions Developer IAM role in order to deploy Cloud Functions.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/build/docs/securing-builds/configure-access-for-cloud-build-service-account#granting_a_role_using_the_iam_page
https://cloud.google.com/build/docs/troubleshooting#build_trigger_fails_due_to_missing_cloudbuildbuildscreate_permission
upvoted 1 times
Selected Answer: D
Answer D
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/build/docs/securing-builds/configure-access-for-cloud-build-service-account
upvoted 1 times
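A sketch of the IAM bindings the commenters reference; the project ID and project number below are hypothetical placeholders.

  # Allow the Cloud Build service account to deploy Cloud Functions
  gcloud projects add-iam-policy-binding my-project \
      --member="serviceAccount:123456789@cloudbuild.gserviceaccount.com" \
      --role="roles/cloudfunctions.developer"
  # Cloud Build must also be able to act as the functions' runtime service account
  gcloud iam service-accounts add-iam-policy-binding my-project@appspot.gserviceaccount.com \
      --member="serviceAccount:123456789@cloudbuild.gserviceaccount.com" \
      --role="roles/iam.serviceAccountUser"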
You manage a microservices application on Google Kubernetes Engine (GKE) using Istio. You secure the communication channels between your
microservices by implementing an Istio AuthorizationPolicy, a Kubernetes NetworkPolicy, and mTLS on your GKE cluster. You discover that HTTP
requests between two Pods to specific URLs fail, while other requests to other URLs succeed. What is the cause of the connection issue?
B. The Pod initiating the HTTP requests is attempting to connect to the target Pod via an incorrect TCP port.
C. The Authorization Policy of your cluster is blocking HTTP requests for specific paths within your application.
D. The cluster has mTLS configured in permissive mode, but the Pod's sidecar proxy is sending unencrypted traffic in plain text.
Correct Answer: D
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
The key here is that "HTTP requests between two Pods to specific URLs fail"; this means no authorization rule is set for these URLs in the Istio configuration.
upvoted 1 times
Selected Answer: C
A is not correct because Kubernetes NetworkPolicy resources allow you to block HTTP traffic between groups of pods but not for selected paths.
(https://kubernetes.io/docs/concepts/services-networking/network-policies/).
B is not correct because if the client pod is using an incorrect port to communicate with the server, pod requests will time out for all URL paths.
C is correct because an Istio Authorization policy allows you to block HTTP methods between pods for specific URL paths
(https://istio.io/latest/docs/tasks/security/authorization/authz-http/).
D is not correct because mTLS configuration using Istio should not cause HTTP requests to fail. In permissive mode (default configuration), a
service can accept both plain text and mTLS encrypted traffic (https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/).
upvoted 3 times
Selected Answer: C
https://cloud.google.com/service-mesh/docs/troubleshooting/troubleshoot-security#authorization_policy_denial_logging
Answer C
https://istio.io/latest/docs/ops/common-problems/network-issues/#sending-https-to-an-http-port
upvoted 1 times
Selected Answer: C
C is the answer.
upvoted 1 times
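For illustration, an AuthorizationPolicy of roughly this shape would produce exactly the symptom described: requests to the listed paths fail while everything else succeeds. The namespace, labels, and paths are hypothetical.

  kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1beta1
  kind: AuthorizationPolicy
  metadata:
    name: deny-admin-paths
    namespace: default
  spec:
    selector:
      matchLabels:
        app: backend
    action: DENY
    rules:
    - to:
      - operation:
          paths: ["/admin/*"]
  EOF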
You recently migrated an on-premises monolithic application to a microservices application on Google Kubernetes Engine (GKE). The application
has dependencies on backend services on-premises, including a CRM system and a MySQL database that contains personally identifiable
information (PII). The backend services must remain on-premises to meet regulatory requirements.
You established a Cloud VPN connection between your on-premises data center and Google Cloud. You notice that some requests from your
microservices application on GKE to the backend services are failing due to latency issues caused by fluctuating bandwidth, which is causing the
A. Use Memorystore to cache frequently accessed PII data from the on-premises MySQL database
B. Use Istio to create a service mesh that includes the microservices on GKE and the on-premises services
C. Increase the number of Cloud VPN tunnels for the connection between Google Cloud and the on-premises services
D. Decrease the network layer packet size by decreasing the Maximum Transmission Unit (MTU) value from its default value on Cloud VPN
Correct Answer: A
Selected Answer: B
To use Istio to reduce latency in your microservices application, you would create a service mesh that includes the microservices on GKE and the
on-premises services. Istio would then manage traffic between the microservices and the on-premises services, and would use its features to
reduce latency.
upvoted 1 times
Selected Answer: C
Cloud Interconnect would be better option than C, I guess. Increase the VPN tunnels would provide the required bandwidth for the GCP and On-
Premises services to communicate.
upvoted 1 times
Istio can help address the latency issues by creating a service mesh that allows you to control the flow of traffic between the microservices on GKE and the on-premises services. This lets you monitor and manage the traffic, and implement features such as load balancing and circuit breaking to help mitigate the impact of latency on the application. It is also possible to increase the number of Cloud VPN tunnels for the connection between Google Cloud and the on-premises services, but that is not the best approach: more tunnels can increase the available bandwidth, but they do not address the underlying issues causing the latency. Decreasing the network-layer packet size by lowering the MTU value on Cloud VPN can cause fragmentation, which can increase latency, so it is not a good approach either. Caching PII data can be good practice, but it does not address the latency issues caused by fluctuating bandwidth.
upvoted 2 times
Selected Answer: C
https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies#more-bandwidth
To increase the bandwidth of your HA VPN gateways, add more HA VPN tunnels.
upvoted 3 times
Selected Answer: C
GCP does support having multiple VPN Tunnels on the same gateway
https://cloud.google.com/network-connectivity/docs/vpn/concepts/choosing-networks-routing#route-alignment
upvoted 4 times
https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies#more-bandwidth
To increase the bandwidth of your HA VPN gateways, add more HA VPN tunnels.
upvoted 1 times
A is the answer.
upvoted 1 times
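A sketch of adding a second tunnel to an existing HA VPN gateway, as the linked documentation suggests; all resource names and the shared secret are placeholders.

  gcloud compute vpn-tunnels create tunnel-2 --region=us-central1 \
      --vpn-gateway=ha-vpn-gw --interface=1 \
      --peer-external-gateway=on-prem-gw --peer-external-gateway-interface=0 \
      --router=vpn-router --ike-version=2 --shared-secret=[SHARED_SECRET]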
Your company has deployed a new API to a Compute Engine instance. During testing, the API is not behaving as expected. You want to monitor
the application over 12 hours to diagnose the problem within the application code without redeploying the application. Which tool should you use?
A. Cloud Trace
B. Cloud Monitoring
Correct Answer: B
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
Cloud Debugger logpoints are best suited for production environments, where you should not have to change the code and redeploy just to add log statements.
upvoted 1 times
Selected Answer: C
Selected Answer: C
The answer is C because the problem is within the application code and must be diagnosed without code changes, which eliminates the other choices.
upvoted 1 times
Selected Answer: C
C is the answer.
https://cloud.google.com/debugger/docs/using/logpoints
After you have deployed or started your application, you can open Cloud Debugger in the Google Cloud console. Cloud Debugger allows you to
inject logging into running services without restarting or interfering with the normal function of the service. This can be useful for debugging
production issues without having to add log statements and redeploy.
upvoted 1 times
Selected Answer: C
The answer is D: you can add logpoints without redeploying or changing code. https://cloud.google.com/debugger/docs/using/logpoints Logpoints allow you to inject logging into running services without restarting or interfering with the normal function of the service.
upvoted 1 times
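For illustration, a logpoint can be injected from the CLI without touching the deployed code; the file, line, message, and target below are hypothetical.

  # Add a logpoint to the running service; it logs each time the line executes
  gcloud debug logpoints create api/orders.py:42 \
      "order_id={order.id} status={order.status}" --target=default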
You are designing an application that consists of several microservices. Each microservice has its own RESTful API and will be deployed as a
separate Kubernetes Service. You want to ensure that the consumers of these APIs aren't impacted when there is a change to your API, and also
ensure that third-party systems aren't interrupted when new versions of the API are released. How should you configure the connection to the
A. Use an Ingress that uses the API's URL to route requests to the appropriate backend.
B. Leverage a Service Discovery system, and connect to the backend specified by the request.
C. Use multiple clusters, and use DNS entries to route requests to separate versioned backends.
D. Combine multiple versions in the same service, and then specify the API version in the POST request.
Correct Answer: C
Selected Answer: B
This approach is recommended by Google because it allows you to decouple the consumers of your APIs from the specific backend services that
are providing those APIs. This makes it easier to scale your application and to make changes to your APIs without impacting the consumers.
upvoted 1 times
An Ingress or an NGINX service that routes (reverse proxies) to the appropriate URLs is the best solution.
upvoted 1 times
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#deprecated_annotation
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#features_of_https_load_balancing
Answer A
upvoted 3 times
B. Service discovery only works within the cluster itself, so external clients can't use it
C. Using multiple clusters is overkill; you can deploy multiple versions of the same service within a single cluster
D. Passing the API version in the request body is not a REST best practice
The best practice is to pass the version of the API in the URL path, e.g. /v1/foo, /v2/foo.
Using this approach, you can route requests to the appropriate backend service within the GKE cluster using an Ingress resource, which is option A.
upvoted 3 times
D is the answer.
upvoted 2 times
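A sketch of the path-based routing described above, with hypothetical Service names; each API version maps to its own backend, so new versions can be released without disturbing consumers of the old ones.

  kubectl apply -f - <<EOF
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: api-ingress
  spec:
    rules:
    - http:
        paths:
        - path: /v1
          pathType: Prefix
          backend:
            service:
              name: api-v1
              port:
                number: 80
        - path: /v2
          pathType: Prefix
          backend:
            service:
              name: api-v2
              port:
                number: 80
  EOF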
Your team is building an application for a financial institution. The application's frontend runs on Compute Engine, and the data resides in Cloud
SQL and one Cloud Storage bucket. The application will collect data containing PII, which will be stored in the Cloud SQL database and the Cloud
Storage bucket. You need to secure the PII data. What should you do?
A. 1. Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database
2. Using IAM, allow only the frontend service account to access the Cloud Storage bucket
B. 1. Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database
2. Enable private access to allow the frontend to access the Cloud Storage bucket privately
3. Add the Cloud SQL database and the Cloud Storage bucket to the same service perimeter
3. Add the Cloud SQL database and the Cloud Storage bucket to different service perimeters
Correct Answer: B
Selected Answer: C
Without using VPC Service Controls, the PII data is not secured against exfiltration, which leaves only C and D as possible valid responses. However, D can be eliminated because both the Cloud SQL instance and the Cloud Storage bucket must be within the same perimeter, which leaves C as the valid answer.
upvoted 5 times
Selected Answer: C
I would go with C
upvoted 1 times
Selected Answer: C
Selected Answer: C
Answer C
upvoted 3 times
B is the answer.
upvoted 1 times
Since the correct answer is C, I cannot understand why this site suggests B as the correct answer. Most of the answers proposed by examtopics.com are wrong ... so why are we paying for a subscription?
I was expecting something more accurate.
upvoted 1 times
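For illustration, the service-perimeter piece of option C can be sketched as follows; the policy ID, project number, and perimeter name are hypothetical.

  # Place the project's Cloud SQL and Cloud Storage APIs inside one VPC Service Controls perimeter
  gcloud access-context-manager perimeters create pii_perimeter --policy=123456 \
      --title="PII perimeter" --resources=projects/987654321 \
      --restricted-services=sqladmin.googleapis.com,storage.googleapis.com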
You are designing a chat room application that will host multiple rooms and retain the message history for each room. You have selected Firestore
A. Create a collection for the rooms. For each room, create a document that lists the contents of the messages
B. Create a collection for the rooms. For each room, create a collection that contains a document for each message
C. Create a collection for the rooms. For each room, create a document that contains a collection for documents, each of which contains a
message.
D. Create a collection for the rooms, and create a document for each room. Create a separate collection for messages, with one document per
Correct Answer: C
Selected Answer: C
Community answer C.
upvoted 1 times
Selected Answer: C
C is the answer.
https://firebase.google.com/docs/firestore/data-model#hierarchical-data
upvoted 1 times
Selected Answer: C
Answer is C. "The best way to store messages in this scenario is by using subcollections. A subcollection is a collection associated with a specific
document."
https://firebase.google.com/docs/firestore/data-model#subcollections
upvoted 1 times
You are developing an application that will handle requests from end users. You need to secure a Cloud Function called by the application to allow
authorized end users to authenticate to the function via the application while restricting access to unauthorized users. You will integrate Google
Sign-In as part of the solution and want to follow Google-recommended best practices. What should you do?
A. Deploy from a source code repository and grant users the roles/cloudfunctions.viewer role.
B. Deploy from a source code repository and grant users the roles/cloudfunctions.invoker role
C. Deploy from your local machine using gcloud and grant users the roles/cloudfunctions.admin role
D. Deploy from your local machine using gcloud and grant users the roles/cloudfunctions.developer role
Correct Answer: C
The key here is "secure a Cloud Function CALLED by the application to allow authorized end users to authenticate to the function via the
application while restricting access to unauthorized users"
upvoted 1 times
Selected Answer: B
B is the answer.
upvoted 1 times
Selected Answer: B
Have the user account you are using to access Cloud Functions assigned a role that contains the cloudfunctions.functions.invoke permission. By
default, the Cloud Functions Admin and Cloud Functions Developer roles have this permission. See Cloud Functions IAM Roles for the full list of
roles and their associated permissions.
upvoted 1 times
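A minimal sketch of option B's access model, with hypothetical function and user names. Deploying with unauthenticated access disabled forces callers to present credentials, and the invoker role is then granted only to authorized identities.

  gcloud functions deploy my-function --runtime=java17 \
      --trigger-http --no-allow-unauthenticated
  gcloud functions add-iam-policy-binding my-function \
      --member="user:end-user@example.com" --role="roles/cloudfunctions.invoker"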
You are running a web application on Google Kubernetes Engine that you inherited. You want to determine whether the application is using
libraries with known vulnerabilities or is vulnerable to XSS attacks. Which service should you use?
B. Debugger
D. Error Reporting
Correct Answer: C
Selected Answer: C
Web Security Scanner in the GCP environment; alternatively, a tool like Grype can be used for vulnerability scanning in on-premises networks.
upvoted 1 times
Answer C
upvoted 1 times
Selected Answer: C
C is the answer.
https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview
Web Security Scanner identifies security vulnerabilities in your App Engine, Google Kubernetes Engine (GKE), and Compute Engine web
applications. It crawls your application, following all links within the scope of your starting URLs, and attempts to exercise as many user inputs and
event handlers as possible.
upvoted 2 times
https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-overview
upvoted 1 times
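If your gcloud release exposes the alpha surface, a scan configuration can be sketched like this; the display name and starting URL are hypothetical, and the Security Command Center console is an equally valid way to set it up.

  gcloud alpha web-security-scanner scan-configs create \
      --display-name="inherited-webapp" --starting-urls=http://my-app.example.com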
You are building a highly available and globally accessible application that will serve static content to users. You need to configure the storage
and serving components. You want to minimize management overhead and latency while maximizing reliability for users. What should you do?
A. 1. Create a managed instance group. Replicate the static content across the virtual machines (VMs)
3. Enable Cloud CDN, and send traffic to the managed instance group.
B. 1. Create an unmanaged instance group. Replicate the static content across the VMs.
3. Enable Cloud CDN, and send traffic to the unmanaged instance group.
C. 1. Create a Standard storage class, regional Cloud Storage bucket. Put the static content in the bucket
D. 1. Create a Standard storage class, multi-regional Cloud Storage bucket. Put the static content in the bucket.
Correct Answer: B
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Selected Answer: D
D is the answer.
upvoted 1 times
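For illustration, a sketch of option D with Cloud CDN in front, assuming a hypothetical bucket name and backend-bucket name:

  # Multi-regional Standard-class bucket holding the static content
  gsutil mb -c standard -l US gs://hiplocal-static-content
  gsutil -m rsync -r ./static gs://hiplocal-static-content
  # Serve the bucket through an external HTTP(S) load balancer backend with Cloud CDN
  gcloud compute backend-buckets create static-backend \
      --gcs-bucket-name=hiplocal-static-content --enable-cdn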
Case study: HipLocal (same case study details as above).
HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort?
A. Migrate the database to Bigtable and use it to serve all global user traffic.
B. Migrate the database to Cloud Spanner and use it to serve all global user traffic.
C. Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic.
D. Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.
Correct Answer: D
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
Selected Answer: B
B is the answer.
upvoted 1 times
While the question asks for the "least amount of effort" ... all possible answers require a database migration. So it really boils down to which database will be easiest to migrate to.
HipLocal is using MySQL, which is a Relational database, so that rules out all options that are not relational ... leaving option B, Cloud Spanner as
the only valid option.
Also, option D is completely unrelated. There is no point in migrated services to Kubernetes if you what you are after is improving the database
latency. Maybe, if they put the services closer to the database it would help, but option D does not say anything about that.
upvoted 2 times
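For illustration, the Spanner migration target in option B could be provisioned like this; the instance name, multi-region configuration, and node count are hypothetical.

  gcloud spanner instances create hiplocal-global --config=nam-eur-asia1 --nodes=3 \
      --description="HipLocal global user database"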
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However,
there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might
contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to
the next section of the exam. After you begin a new section, you cannot return to this section.
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study
before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem
statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the
subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and
organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in
Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid
growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to
hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides
clear uptime data, and that they analyze and respond to any issues that occur.
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands
their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
• Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
• Applications are manually deployed by infrastructure engineers during periods of slow traffic on weekday evenings.
• There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
• Ensure a consistent experience for users when they travel to different regions.
• Obtain user activity metrics to better understand how to monetize their product.
• Ensure compliance with regulations in the new regions (for example, GDPR).
Technical Requirements -
• Provide secure communications between the on-premises data center and cloud-hosted applications and infrastructure.
• Logging and performance metrics must provide actionable information to be able to provide debugging information and alerts.
Which Google Cloud product addresses HipLocal’s business requirements for service level indicators and objectives?
A. Cloud Profiler
B. Cloud Monitoring
C. Cloud Trace
D. Cloud Logging
Correct Answer: B
Selected Answer: B
Selected Answer: B
Answer B
upvoted 1 times
B is the answer.
https://cloud.google.com/stackdriver/docs/solutions/slo-monitoring
upvoted 1 times
Answer is B
https://cloud.google.com/stackdriver/docs/solutions/slo-monitoring#defn-sli
upvoted 1 times
Case study -
(Same HipLocal case study as reproduced above.)
A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text
on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do?
A. Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database
credentials.
B. Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials.
C. Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate as this account and authenticate using the Cloud
SQL Proxy.
D. Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with Secret Manager.
Correct Answer: C
Selected Answer: D
Secret Manager is a service that helps you manage and protect your secrets, such as passwords, API keys, and SSH keys. Secret Manager encrypts your secrets using Google-managed encryption keys and provides a number of features to help you manage and protect them.
upvoted 1 times
Selected Answer: D
Selected Answer: D
D is the answer.
upvoted 1 times
Selected Answer: D
Both A, and B go against best practices that say you should try avoiding service account keys. Plus, these two answers would still store the service
account key in the VM.
Option C is completely irrelevant as it does not address the issue at hand, which is plain text credentials stored on disk.
https://cloud.google.com/secret-manager/docs/overview
upvoted 1 times
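To make option D concrete, here is a minimal Go sketch of an application reading its database credentials from Secret Manager at startup. The project and secret names (my-project, db-credentials) are hypothetical placeholders:

package main

import (
	"context"
	"log"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
)

func main() {
	ctx := context.Background()

	// Runs as the Compute Engine service account, which needs
	// roles/secretmanager.secretAccessor -- no key file involved.
	client, err := secretmanager.NewClient(ctx)
	if err != nil {
		log.Fatalf("secretmanager.NewClient: %v", err)
	}
	defer client.Close()

	resp, err := client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
		Name: "projects/my-project/secrets/db-credentials/versions/latest", // placeholders
	})
	if err != nil {
		log.Fatalf("AccessSecretVersion: %v", err)
	}

	// resp.Payload.Data holds the credential bytes; use them to open the
	// database connection and never write them to disk.
	log.Printf("fetched credentials (%d bytes)", len(resp.Payload.Data))
}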
Case study -
(Same HipLocal case study as reproduced above.)
HipLocal is expanding into new locations. They must capture additional data each time the application is launched in a new European country.
This is causing delays in the development process due to constant schema changes and a lack of environments for conducting testing on the
application changes. How should they resolve the issue while meeting the business requirements?
A. Create new Cloud SQL instances in Europe and North America for testing and deployment. Provide developers with local MySQL instances to conduct testing on the application changes.
B. Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to emulate a local Bigtable development environment.
C. Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across regions in the Americas and Europe. Provide
developers with local MySQL instances to conduct testing on the application changes.
D. Migrate data to Firestore in Native mode and set up instances in Europe and North America. Instruct the development teams to use the Cloud SDK to emulate a local Firestore development environment.
Correct Answer: B
Selected Answer: D
Selected Answer: D
D is the answer.
upvoted 2 times
Selected Answer: D
1. It's a document store without strict schema enforcement, so it's a good fit for "constant schema changes".
2. You can set up separate instances in NA and EU to satisfy GDPR.
3. Dev teams can emulate Firestore locally for testing.
4. It's a managed service, so reduces infra management time and cost
Option C is a non-starter, as it moves from a managed service to a non-managed one; it also replicates data between EU and NA, which is against GDPR.
Option B could work, but Bigtable is overkill for HipLocal; it costs more than Firestore.
Option A does not reduce infrastructure management, as they need to provide local MySQL instances for developers.
upvoted 3 times
You are writing from a Go application to a Cloud Spanner database. You want to optimize your application's performance using Google-recommended best practices. What should you do?
Correct Answer: C
Selected Answer: A
A is correct
upvoted 1 times
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
https://cloud.google.com/apis/docs/cloud-client-libraries
upvoted 1 times
Selected Answer: A
option A
upvoted 1 times
Selected Answer: A
A is the answer.
https://cloud.google.com/spanner/docs/reference/libraries
upvoted 2 times
Selected Answer: A
A is correct
B and C are part of A.
Not sure about D.
“Cloud Client Libraries are the recommended option for accessing Cloud APIs programmatically, where available. Cloud Client Libraries use the
latest client library models”
https://cloud.google.com/apis/docs/client-libraries-explained
https://cloud.google.com/go/docs/reference
upvoted 1 times
Therefore, B is wrong.
upvoted 1 times
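To illustrate option A, here is a minimal sketch using the Go Cloud Client Library for Spanner (cloud.google.com/go/spanner); the database path is a placeholder. The client library manages sessions, connection pooling, and gRPC for you, which is what the Google-recommended approach buys over hand-rolled REST or raw gRPC:

package main

import (
	"context"
	"log"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()

	// The database path below is a placeholder.
	client, err := spanner.NewClient(ctx,
		"projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		log.Fatalf("spanner.NewClient: %v", err)
	}
	defer client.Close()

	// Single() gives a one-shot read-only transaction; the library
	// handles session management and gRPC transport under the hood.
	iter := client.Single().Query(ctx, spanner.Statement{SQL: "SELECT 1"})
	defer iter.Stop()
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatalf("query: %v", err)
		}
		var v int64
		if err := row.Columns(&v); err != nil {
			log.Fatalf("columns: %v", err)
		}
		log.Printf("result: %d", v)
	}
}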
You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google
Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices of auto-rotating your security keys
and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service. What should
you do next?
A. Assign the Google Cloud service account to your GKE Pod using Workload Identity.
B. Export the Google Cloud service account, and share it with the Pod as a Kubernetes Secret.
C. Export the Google Cloud service account, and embed it in the source code of the application.
D. Export the Google Cloud service account, and upload it to HashiCorp Vault to generate a dynamic service account for your application.
Correct Answer: B
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
Selected Answer: A
https://cloud.google.com/iam/docs/best-practices-service-accounts#use-workload-identity
upvoted 2 times
Selected Answer: A
workload identity
upvoted 1 times
Selected Answer: A
option A
upvoted 1 times
A is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
Applications running on GKE might need access to Google Cloud APIs such as Compute Engine API, BigQuery Storage API, or Machine Learning
APIs.
Workload Identity allows a Kubernetes service account in your GKE cluster to act as an IAM service account. Pods that use the configured
Kubernetes service account automatically authenticate as the IAM service account when accessing Google Cloud APIs. Using Workload Identity
allows you to assign distinct, fine-grained identities and authorization for each application in your cluster.
upvoted 1 times
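The practical payoff of option A is that application code stays key-free: with Workload Identity bound to the Pod's Kubernetes service account, Application Default Credentials resolve to the IAM service account automatically. A minimal sketch, assuming the application reads from Cloud Storage (bucket and object names are placeholders):

package main

import (
	"context"
	"io"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	// No credentials are passed explicitly; ADC picks up the
	// Workload Identity-backed service account automatically.
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage.NewClient: %v", err)
	}
	defer client.Close()

	// "my-bucket" and "config.json" are placeholders.
	r, err := client.Bucket("my-bucket").Object("config.json").NewReader(ctx)
	if err != nil {
		log.Fatalf("NewReader: %v", err)
	}
	defer r.Close()
	data, err := io.ReadAll(r)
	if err != nil {
		log.Fatalf("ReadAll: %v", err)
	}
	log.Printf("read %d bytes", len(data))
}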
You are planning to deploy hundreds of microservices in your Google Kubernetes Engine (GKE) cluster. How should you secure communication between the microservices?
A. Use global HTTP(S) Load Balancing with managed SSL certificates to protect your services
B. Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh
D. Install Anthos Service Mesh, and enable mTLS in your Service Mesh.
Correct Answer: B
Selected Answer: B
I will go with B.
upvoted 1 times
Selected Answer: D
Initially I thought B could be the option. Later I realized that running Istio directly on a GKE cluster (the Istio on GKE add-on) is deprecated. Hence going for D.
upvoted 1 times
Selected Answer: B
Google Cloud provides a service called Istio on GKE that simplifies the management, scaling, and automatic upgrades of Istio on GKE clusters, giving you the flexibility of Istio with the ease of a managed service.
Anthos Service Mesh is a service mesh built on top of Istio, and is designed to be used in conjunction with Google Cloud's Anthos platform. It
provides many of the same features as Istio, but it also includes some additional features that are specific to Anthos, such as support for hybrid and
multi-cloud environments.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/architecture/service-meshes-in-microservices-architecture#security_2
upvoted 3 times
Answer D
upvoted 2 times
Selected Answer: D
option D
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/service-mesh/docs/overview#security_benefits
- Ensures encryption in transit. Using mTLS for authentication also ensures that all TCP communications are encrypted in transit.
upvoted 3 times
You are developing an application that will store and access sensitive unstructured data objects in a Cloud Storage bucket. To comply with
regulatory requirements, you need to ensure that all data objects are available for at least 7 years after their initial creation. Objects created more
than 3 years ago are accessed very infrequently (less than once a year). You need to configure object storage while ensuring that storage cost is optimized. What should you do? (Choose two.)
A. Set a retention policy on the bucket with a period of 7 years.
B. Use IAM Conditions to provide access to objects 7 years after the object creation date.
C. Enable Object Versioning to prevent objects from being accidentally deleted for 7 years after object creation.
D. Create an object lifecycle policy on the bucket that moves objects from Standard Storage to Archive Storage after 3 years.
E. Implement a Cloud Function that checks the age of each object in the bucket and moves the objects older than 3 years to a second bucket
with the Archive Storage class. Use Cloud Scheduler to trigger the Cloud Function on a daily schedule.
Correct Answer: BD
Selected Answer: AD
AD is correct.
upvoted 1 times
Selected Answer: AD
Keys:
1) all data objects are available for at least 7 years after their initial creation : A. Set a retention policy on the bucket with a period of 7 years
2) Objects created more than 3 years ago are accessed very infrequently (less than once a year) : D. Create an object lifecycle policy on the bucket
that moves objects from Standard Storage to Archive Storage after 3 years.
upvoted 1 times
https://cloud.google.com/storage/docs/bucket-lock
https://cloud.google.com/storage/docs/lifecycle
upvoted 1 times
Selected Answer: AD
https://cloud.google.com/storage/docs/bucket-lock
https://cloud.google.com/storage/docs/lifecycle
Answer A,D
upvoted 2 times
Selected Answer: AD
A is correct because Cloud Storage provides an option to configure a retention lifecycle rule.
B is incorrect because it is not a recommended way to implement data retention requirements.
C is incorrect because it does not guarantee that objects are not deleted within 7 years after object creation.
D is correct because it’s the easiest and recommended way to implement a storage lifecycle policy to move objects from Standard to Archive tier.
E is incorrect because you do not require two buckets to store objects on two storage tiers.
upvoted 1 times
option AD
upvoted 1 times
Selected Answer: AD
AD is the answer.
https://cloud.google.com/storage/docs/bucket-lock
This page discusses the Bucket Lock feature, which allows you to configure a data retention policy for a Cloud Storage bucket that governs how
long objects in the bucket must be retained. The feature also allows you to lock the data retention policy, permanently preventing the policy from
being reduced or removed.
https://cloud.google.com/storage/docs/storage-classes#archive
Archive storage is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Unlike the "coldest"
storage services offered by other Cloud providers, your data is available within milliseconds, not hours or days.
Archive storage is the best choice for data that you plan to access less than once a year.
upvoted 1 times
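For illustration, both settings (answers A and D) can also be applied programmatically. A minimal Go sketch with the Cloud Storage client library, assuming a placeholder bucket name (gsutil or the console work just as well):

package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage.NewClient: %v", err)
	}
	defer client.Close()

	// "sensitive-objects" is a placeholder bucket name.
	bucket := client.Bucket("sensitive-objects")

	_, err = bucket.Update(ctx, storage.BucketAttrsToUpdate{
		// Answer A: objects cannot be deleted or overwritten for
		// 7 years after creation.
		RetentionPolicy: &storage.RetentionPolicy{
			RetentionPeriod: 7 * 365 * 24 * time.Hour,
		},
		// Answer D: move objects to Archive Storage after 3 years.
		Lifecycle: &storage.Lifecycle{
			Rules: []storage.LifecycleRule{{
				Action: storage.LifecycleAction{
					Type:         storage.SetStorageClassAction,
					StorageClass: "ARCHIVE",
				},
				Condition: storage.LifecycleCondition{AgeInDays: 3 * 365},
			}},
		},
	})
	if err != nil {
		log.Fatalf("bucket update: %v", err)
	}
}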
You are developing an application using different microservices that must remain internal to the cluster. You want the ability to configure each
microservice with a specific number of replicas. You also want the ability to address a specific microservice from any other microservice in a
uniform way, regardless of the number of replicas the microservice scales to. You plan to implement this solution on Google Kubernetes Engine.
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.
B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.
C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the Pod from other microservices within the cluster.
D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address to address the Pod from other microservices within the cluster.
Correct Answer: C
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
A is correct because the Service will have a DNS entry inside the cluster that other microservices can use to address the Pods of the Deployment that the Service is targeting.
B Is not correct because an Ingress exposes a Service using an external or internal HTTP(s) load balancer, and it does not apply directly to a
Deployment.
C is not correct because a Pod is a single instance of the microservice, whereas a Deployment can be configured with a number of replicas.
D is not correct because it combines the mistakes of options B and C.
upvoted 2 times
Selected Answer: A
Answer A
upvoted 1 times
Selected Answer: A
option A
upvoted 1 times
Selected Answer: A
A is the answer.
upvoted 2 times
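A small sketch of what "addressing by Service DNS name" looks like from inside the cluster; the orders Service, shop namespace, port, and path are all hypothetical:

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	// "orders" is a hypothetical Service name in the "shop" namespace.
	// The DNS name stays stable no matter how many replicas the
	// Deployment behind the Service scales to.
	resp, err := http.Get("http://orders.shop.svc.cluster.local:8080/healthz")
	if err != nil {
		log.Fatalf("call to orders service failed: %v", err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read body: %v", err)
	}
	log.Printf("orders responded: %s", body)
}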
You are building an application that uses a distributed microservices architecture. You want to measure the performance and system resource
utilization in one of the microservices written in Java. What should you do?
A. Instrument the service with Cloud Profiler to measure CPU utilization and method-level execution times in the service.
D. Instrument the service with OpenCensus to measure service latency, and write custom metrics to Cloud Monitoring.
Correct Answer: C
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
Selected Answer: A
A, use profiler
upvoted 1 times
Answer A
https://cloud.google.com/profiler/docs/profiling-java
https://cloud.google.com/appengine/docs/legacy/standard/java/microservice-performance
upvoted 3 times
A.
https://cloud.google.com/profiler/docs
upvoted 1 times
Selected Answer: A
A is the answer.
https://cloud.google.com/profiler/docs/profiling-java
upvoted 1 times
Your team is responsible for maintaining an application that aggregates news articles from many different sources. Your monitoring dashboard
contains publicly accessible real-time reports and runs on a Compute Engine instance as a web application. External stakeholders and analysts
need to access these reports via a secure channel without authentication. How should you configure this secure channel?
A. Add a public IP address to the instance. Use the service account key of the instance to encrypt the traffic.
B. Use Cloud Scheduler to trigger Cloud Build every hour to create an export from the reports. Store the reports in a public Cloud Storage
bucket.
C. Add an HTTP(S) load balancer in front of the monitoring dashboard. Configure Identity-Aware Proxy to secure the communication channel.
D. Add an HTTP(S) load balancer in front of the monitoring dashboard. Set up a Google-managed SSL certificate on the load balancer for
traffic encryption.
Correct Answer: B
Selected Answer: D
D. Add an HTTP(S) load balancer in front of the monitoring dashboard. Set up a Google-managed SSL certificate on the load balancer for traffic
encryption.
This option provides the most secure way to configure a publicly accessible channel for your monitoring dashboard without authentication. The
HTTP(S) load balancer will distribute traffic to the backend instances of the dashboard, and the Google-managed SSL certificate will encrypt all
traffic between the load balancer and the users.
upvoted 1 times
Selected Answer: D
This approach is the most secure and reliable way to configure a secure channel for external stakeholders and analysts to access the publicly
accessible real-time reports in your monitoring dashboard.
upvoted 1 times
SSL/TLS is a must for encrypting data in transit. Since no authentication is required, D is the more suitable option. If authentication were required, we could have chosen C.
upvoted 1 times
Selected Answer: D
option D
upvoted 1 times
D is the answer.
upvoted 2 times
Selected Answer: D
D is correct. This provides an external HTTPS endpoint, and uses Google-managed services and a valid SSL certificate.
https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs
upvoted 1 times
You are planning to add unit tests to your application. You need to be able to assert that published Pub/Sub messages are processed by your
subscriber in order. You want the unit tests to be cost-effective and reliable. What should you do?
Correct Answer: D
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Option B, creating a topic and subscription for each tester, would be costly and time-consuming as it would require creating and managing a large
number of topics and subscriptions. Additionally, it would not ensure that messages are processed in order, as messages may be delivered out of
order to different subscriptions.
Option D, using the Pub/Sub emulator, would be cost-effective and reliable as it allows you to test your application's Pub/Sub functionality locally
without incurring any costs. Additionally, the emulator allows you to easily assert that messages are processed in order by using the same topic
and subscription for all unit tests.
upvoted 2 times
Agree with D. They want the unit tests to be cost-effective and reliable, so we need an emulator, which will never have an issue doing that.
B is not correct for me because a unit test using a real topic and subscription can be flaky, and it's not cost-effective to pay for a subscription for each tester. B is more a solution for an integration test.
upvoted 1 times
https://brightsec.com/blog/unit-testing-best-practices/
One scenario per test
upvoted 1 times
Selected Answer: D
Selected Answer: D
The answer is D. See https://cloud.google.com/pubsub/docs/emulator, "Testing apps locally with the emulator".
upvoted 3 times
option B
upvoted 1 times
B is the answer.
upvoted 1 times
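For Go projects there is also an in-process variant of the emulator approach: the pstest package that ships with the Go Pub/Sub client. A rough test sketch, assuming a client-library version whose fake supports ordering keys (project, topic, and subscription names are placeholders):

package ordering_test

import (
	"context"
	"sync"
	"testing"

	"cloud.google.com/go/pubsub"
	"cloud.google.com/go/pubsub/pstest"
	"google.golang.org/api/option"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func TestMessagesProcessedInOrder(t *testing.T) {
	ctx := context.Background()

	srv := pstest.NewServer() // in-process fake: hermetic and free
	defer srv.Close()

	conn, err := grpc.Dial(srv.Addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		t.Fatal(err)
	}
	defer conn.Close()

	client, err := pubsub.NewClient(ctx, "test-project", option.WithGRPCConn(conn))
	if err != nil {
		t.Fatal(err)
	}
	defer client.Close()

	topic, err := client.CreateTopic(ctx, "orders")
	if err != nil {
		t.Fatal(err)
	}
	topic.EnableMessageOrdering = true

	sub, err := client.CreateSubscription(ctx, "orders-sub", pubsub.SubscriptionConfig{
		Topic:                 topic,
		EnableMessageOrdering: true,
	})
	if err != nil {
		t.Fatal(err)
	}

	// Publish two messages with the same ordering key.
	for _, payload := range []string{"first", "second"} {
		if _, err := topic.Publish(ctx, &pubsub.Message{
			Data:        []byte(payload),
			OrderingKey: "user-1",
		}).Get(ctx); err != nil {
			t.Fatal(err)
		}
	}

	// Receive both and assert they arrive in publish order.
	var mu sync.Mutex
	var got []string
	cctx, cancel := context.WithCancel(ctx)
	err = sub.Receive(cctx, func(_ context.Context, m *pubsub.Message) {
		m.Ack()
		mu.Lock()
		got = append(got, string(m.Data))
		if len(got) == 2 {
			cancel()
		}
		mu.Unlock()
	})
	if err != nil {
		t.Fatal(err)
	}
	if len(got) != 2 || got[0] != "first" || got[1] != "second" {
		t.Errorf("want [first second], got %v", got)
	}
}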
You have an application deployed in Google Kubernetes Engine (GKE) that reads and processes Pub/Sub messages. Each Pod handles a fixed
number of messages per minute. The rate at which messages are published to the Pub/Sub topic varies considerably throughout the day and week, including occasional large batches of messages published at a single moment.
You want to scale your GKE Deployment to be able to process messages in a timely manner. What GKE feature should you use to automatically adapt your workload?
Correct Answer: C
Selected Answer: C
I go with C since we need to scale the GKE Deployment to be able to process messages in a TIMELY manner. An external metric is more suitable for this.
upvoted 1 times
Selected Answer: C
C: https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#pubsub
upvoted 1 times
Selected Answer: C
Based on the requirement that the application reads and processes Pub/Sub messages, and that the rate at which messages are published to the
Pub/Sub topic varies considerably throughout the day and week, including occasional large batches of messages published at a single moment, the
best choice would be C: Horizontal Pod Autoscaler based on an external metric.
By using an external metric, the Horizontal Pod Autoscaler can monitor the number of messages in the Pub/Sub topic and adjust the number of
replicas in the GKE Deployment accordingly. This allows the application to automatically adapt to changes in the rate at which messages are being
published, ensuring that the pods are able to process messages in a timely manner.
On the other hand, a Horizontal Pod Autoscaler based on resource utilization would not provide the needed functionality, as it bases scaling on the resource usage of the Pods, not on the number of messages in the Pub/Sub topic.
upvoted 1 times
Selected Answer: C
Selected Answer: C
Custom and external metrics allow workloads to adapt to conditions besides the workload itself. Consider an application that pulls tasks from a
queue and completes them.
An external metric is reported from an application or service not running on your cluster, but whose performance impacts your Kubernetes application. For example, the metric could be reported from Cloud Monitoring or Pub/Sub. D isn't the answer; before selecting an answer, please do thorough research and understand the concepts and the key words in the question. D can't be the answer in this case.
https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metrics
upvoted 4 times
We need to scale using external metrics here. When Pod 1 is handling its maximum fixed number of messages, we need to spin up Pod 2, etc.
upvoted 2 times
Selected Answer: D
option D
upvoted 1 times
Selected Answer: D
D is the answer.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
upvoted 1 times
You are using Cloud Run to host a web application. You need to securely obtain the application project ID and region where the application is
running and display this information to users. You want to use the most performant approach. What should you do?
A. Use HTTP requests to query the available metadata server at the http://metadata.google.internal/ endpoint with the Metadata-Flavor:
Google header.
B. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Navigate to the Cloud Run “Variables &
Secrets” tab, and add the desired environment variables in Key:Value format.
C. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Write the application configuration
D. Make an API call to the Cloud Asset Inventory API from the application and format the request to include instance metadata.
Correct Answer: B
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
Answer A
https://cloud.google.com/run/docs/container-contract#metadata-server
upvoted 2 times
https://cloud.google.com/run/docs/container-contract#metadata-server
Answer A
upvoted 1 times
Selected Answer: A
https://cloud.google.com/run/docs/container-contract#metadata-server
upvoted 2 times
Selected Answer: A
option A
upvoted 1 times
A is the answer.
https://cloud.google.com/run/docs/container-contract#metadata-server
Cloud Run container instances expose a metadata server that you can use to retrieve details about your container instance, such as the project ID,
region, instance ID or service accounts.
You can access this data from the metadata server using simple HTTP requests to the http://metadata.google.internal/ endpoint with the
Metadata-Flavor: Google header: no client libraries are required.
upvoted 1 times
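A small Go sketch of that metadata-server lookup from inside a Cloud Run container. The two paths shown (project/project-id and instance/region) are the documented ones; everything else is glue:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// queryMetadata fetches one value from the Cloud Run metadata server.
func queryMetadata(path string) (string, error) {
	req, err := http.NewRequest("GET",
		"http://metadata.google.internal/computeMetadata/v1/"+path, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google") // required header
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	project, err := queryMetadata("project/project-id")
	if err != nil {
		log.Fatal(err)
	}
	// On Cloud Run the region endpoint returns
	// projects/PROJECT_NUMBER/regions/REGION.
	region, err := queryMetadata("instance/region")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("project=%s region=%s\n", project, region)
}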
Selected Answer: B
voting B because
is not A since thats compute metadata, if you can access you can query project id but not the region necessarily from cloud run but gce
is not D since you cannot query project id
C is too manual and static
so by discard I guess is B
upvoted 1 times
You need to deploy resources from your laptop to Google Cloud using Terraform. Resources in your Google Cloud environment must be created
using a service account. Your Cloud Identity has the roles/iam.serviceAccountTokenCreator Identity and Access Management (IAM) role and the
necessary permissions to deploy the resources using Terraform. You want to set up your development environment to deploy the desired resources using the service account. What should you do?
A. 1. Download the service account’s key file in JSON format, and store it locally on your laptop.
2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your downloaded key file.
B. 1. Run the following command from a command line: gcloud config set auth/impersonate_service_account service-account-
2. Set the GOOGLE_OAUTH_ACCESS_TOKEN environment variable to the value that is returned by the gcloud auth print-access-token
command.
C. 1. Run the following command from a command line: gcloud auth application-default login.
2. In the browser window that opens, authenticate using your personal credentials.
D. 1. Store the service account's key file in JSON format in Hashicorp Vault.
2. Integrate Terraform with Vault to retrieve the key file dynamically, and authenticate to Vault using a short-lived access token.
Correct Answer: D
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
Selected Answer: B
B
1. impersonation
2. securely set up env variable that will be used by terraform to deploy
upvoted 1 times
Selected Answer: B
Answer is B
https://cloud.google.com/sdk/gcloud/reference/config/set#impersonate_service_account
upvoted 1 times
Selected Answer: B
https://cloud.google.com/blog/topics/developers-practitioners/using-google-cloud-service-account-impersonation-your-terraform-code
Answer B not D
upvoted 3 times
Selected Answer: B
A & D assume that you download and store SA keys, which violates best practices, since you potentially lose control over what happens to those credentials, and it becomes impossible to track who actually uses the SA. D makes it even worse, since it requires you to maintain your own secret management to minimize the risk.
C does nothing that would give you the SA permissions you need.
B follows best practices, since impersonation permissions can be managed transparently via IAM and via logs you can also see who
impersonated/used the SA.
upvoted 4 times
The question already says that you have the role for impersonating the service account.
This means that option B is viable, as you can impersonate that service account and get a token that has the required level of access to create the resources.
upvoted 2 times
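For comparison, the same keyless impersonation pattern is available programmatically. A rough Go sketch with google.golang.org/api/impersonate, where the service account email is a placeholder; conceptually this is what option B's gcloud commands do for Terraform:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/api/impersonate"
)

func main() {
	ctx := context.Background()

	// "deployer@my-project.iam.gserviceaccount.com" is a placeholder.
	// The caller only needs roles/iam.serviceAccountTokenCreator on it;
	// no key file is ever downloaded.
	ts, err := impersonate.CredentialsTokenSource(ctx, impersonate.CredentialsConfig{
		TargetPrincipal: "deployer@my-project.iam.gserviceaccount.com",
		Scopes:          []string{"https://www.googleapis.com/auth/cloud-platform"},
	})
	if err != nil {
		log.Fatalf("impersonate: %v", err)
	}

	tok, err := ts.Token()
	if err != nil {
		log.Fatalf("token: %v", err)
	}
	// This short-lived token is what GOOGLE_OAUTH_ACCESS_TOKEN would carry.
	fmt.Println("access token expires:", tok.Expiry)
}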
Selected Answer: D
D is the answer.
https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys#file-system
Whenever possible, avoid storing service account keys on a file system. If you can't avoid storing keys on disk, make sure to restrict access to the
key file, configure file access auditing, and encrypt the underlying disk.
https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys#software-keystore
In situations where using a hardware-based key store isn't viable, use a software-based key store to manage service account keys. Similar to
hardware-based options, a software-based key store lets users or applications use service account keys without revealing the private key. Software-
based key store solutions can help you control key access in a fine-grained manner and can also ensure that each key access is logged.
upvoted 1 times
Your company uses Cloud Logging to manage large volumes of log data. You need to build a real-time log analysis architecture that pushes logs to a third-party application for analysis. What should you do?
D. Create a Cloud Function to read Cloud Logging log entries and send them to the third-party application.
Correct Answer: C
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
My answer is A.
The third-party service is the one responsible for analytics.
From Google Cloud we just need to push the log messages to the third-party application; that is our part of the analytics architecture.
Real-time push means I go with Pub/Sub.
upvoted 1 times
https://cloud.google.com/logging/docs/export/configure_export_v2#overview
https://cloud.google.com/logging/docs/export/pubsub:
This document explains how you can find log entries that you routed from Cloud Logging to Pub/Sub topics, which occurs in near real-time. We
recommend using Pub/Sub for integrating Cloud Logging logs with third-party software.
When you route logs to a Pub/Sub topic, Logging publishes each log entry as a Pub/Sub message as soon as Logging receives that log entry.
Routed logs are generally available within seconds of their arrival to Logging, with 99% of logs available in less than 60 seconds.
upvoted 1 times
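As a sketch of the setup step, the sink that routes logs to Pub/Sub can be created with the Go logadmin package; the project, topic, sink ID, and filter below are placeholders (gcloud logging sinks create is equivalent):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/logging/logadmin"
)

func main() {
	ctx := context.Background()

	client, err := logadmin.NewClient(ctx, "my-project") // placeholder project
	if err != nil {
		log.Fatalf("logadmin.NewClient: %v", err)
	}
	defer client.Close()

	// Route matching log entries to a Pub/Sub topic in near real time;
	// the third-party application subscribes to this topic.
	sink, err := client.CreateSink(ctx, &logadmin.Sink{
		ID:          "export-to-third-party",
		Destination: "pubsub.googleapis.com/projects/my-project/topics/exported-logs",
		Filter:      `severity >= WARNING`, // narrow as needed
	})
	if err != nil {
		log.Fatalf("CreateSink: %v", err)
	}
	// The sink's writer identity must be granted the Pub/Sub Publisher
	// role on the destination topic.
	log.Printf("grant Pub/Sub Publisher to %s", sink.WriterIdentity)
}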
Selected Answer: A
The processing will be done in a third-party application, so we need a solution that passes logs from GCP to the third party in real time; there is no need for analytics on our side. So the solution is Pub/Sub.
An example from Google of a case corresponding to this question:
https://cloud.google.com/architecture/exporting-stackdriver-logging-for-splunk
upvoted 1 times
Therefore, for real-time log analysis, it is more appropriate to use a solution like Cloud Pub/Sub, which is specifically designed for real-time
streaming of data.
I would go for A
upvoted 1 times
Selected Answer: B
Answer B
Third party transfers for BigQuery Data Transfer Service allow you to automatically schedule and manage recurring load jobs for external data
sources such as Salesforce CRM, Adobe Analytics, and Facebook Ads.
https://cloud.google.com/bigquery/docs/introduction#bigquery-analytics
https://cloud.google.com/blog/products/data-analytics/bigquery-performance-powers-real-time-analytics
Pub/Sub does real-time streaming, not analytics. For analytics it's BigQuery and Dataflow; those can do real-time analytics.
upvoted 1 times
Selected Answer: A
option B
upvoted 1 times
You can't analyze data in Pub/Sub; you only stream it, so understand the difference. The answer is BigQuery.
upvoted 1 times
Selected Answer: B
B is the answer.
upvoted 1 times
Selected Answer: B
vote B
upvoted 1 times
Selected Answer: C
You are developing a new public-facing application that needs to retrieve specific properties in the metadata of users’ objects in their respective
Cloud Storage buckets. Due to privacy and data residency requirements, you must retrieve only the metadata and not the object data. You want to
maximize the performance of the retrieval process. How should you retrieve the metadata?
Correct Answer: D
Selected Answer: D
The requirement here is to access only the metadata. The metadata is stored as key-value pairs, and hence should be retrieved using the fields request parameter only.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/storage/docs/json_api/v1/objects/get
Selected Answer: D
Answer D
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/storage/docs/json_api/v1/objects/get
upvoted 1 times
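In Go client-library terms, a metadata-only read is an Attrs call, which issues the JSON API's objects.get without alt=media, so no object data leaves the bucket. A minimal sketch; bucket and object names are placeholders:

package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage.NewClient: %v", err)
	}
	defer client.Close()

	// Attrs performs a metadata GET only; the object's contents are
	// never downloaded, which satisfies the residency constraint.
	attrs, err := client.Bucket("user-bucket").Object("photo.jpg").Attrs(ctx)
	if err != nil {
		log.Fatalf("Attrs: %v", err)
	}
	fmt.Printf("size=%d contentType=%s updated=%s custom=%v\n",
		attrs.Size, attrs.ContentType, attrs.Updated, attrs.Metadata)
}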
You are deploying a microservices application to Google Kubernetes Engine (GKE) that will broadcast livestreams. You expect unpredictable traffic
patterns and large variations in the number of concurrent users. Your application must meet the following requirements:
C. Use cluster autoscaler to resize the number of nodes in the node pool, and use a Horizontal Pod Autoscaler to scale the workload.
D. Create a managed instance group for Compute Engine with the cluster nodes. Configure autoscaling rules for the managed instance group.
E. Create alerting policies in Cloud Monitoring based on GKE CPU and memory utilization. Ask an on-duty engineer to scale the workload by
executing a script when CPU and memory usage exceed predefined thresholds.
Correct Answer: CE
Selected Answer: AC
https://cloud.google.com/kubernetes-engine/docs/concepts/planning-scalability#choosing_multi-zonal_or_single-zone_node_pools
upvoted 5 times
Selected Answer: AC
AC is correct.
upvoted 1 times
Selected Answer: AC
1) Is resilient in the event of hardware failures -> multi-zonal node pools (A)
2) Scales automatically during popular events and maintains high availability -> cluster autoscaler + Horizontal Pod Autoscaler (C)
upvoted 1 times
Selected Answer: AC
A is for resiliency.
C is for scalability
upvoted 1 times
https://cloud.google.com/kubernetes-engine/docs/concepts/planning-scalability#choosing_multi-zonal_or_single-zone_node_pools : To deploy a
highly available application, distribute your workload across multiple compute zones in a region by using multi-zonal node pools which distribute
nodes uniformly across zones.
upvoted 1 times
Selected Answer: BC
https://cloud.google.com/kubernetes-engine/docs/concepts/planning-scalability#choosing_multi-zonal_or_single-zone_node_pools
The answer is B not A, so it's BC.
upvoted 3 times
https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-creating-a-highly-available
https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#multi-zonal_clusters
Answer A, C
upvoted 1 times
Answer B not A
upvoted 1 times
option A C
upvoted 1 times
AC is the answer.
upvoted 1 times
You work at a rapidly growing financial technology startup. You manage the payment processing application written in Go and hosted on Cloud
Run in the Singapore region (asia-southeast1). The payment processing application processes data stored in a Cloud Storage bucket that is also located in the Singapore region.
The startup plans to expand further into the Asia Pacific region. You plan to deploy the Payment Gateway in Jakarta, Hong Kong, and Taiwan over
the next six months. Each location has data residency requirements that require customer data to reside in the country where the transaction was
made. You want to minimize the cost of these deployments. What should you do?
A. Create a Cloud Storage bucket in each region, and create a Cloud Run service of the payment processing application in each region.
B. Create a Cloud Storage bucket in each region, and create three Cloud Run services of the payment processing application in the Singapore
region.
C. Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run services of the payment processing application in the Singapore region.
D. Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run revisions of the payment processing application in the Singapore region.
Correct Answer: A
Selected Answer: A
A is correct
upvoted 1 times
Selected Answer: A
A is ok
upvoted 1 times
Selected Answer: A
Answer is A
upvoted 1 times
Selected Answer: A
A is the answer.
upvoted 2 times
You recently joined a new team that has a Cloud Spanner database instance running in production. Your manager has asked you to optimize the
Spanner instance to reduce cost while maintaining high reliability and availability of the database. What should you do?
A. Use Cloud Logging to check for error logs, and reduce Spanner processing units by small increments until you find the minimum capacity
required.
B. Use Cloud Trace to monitor the requests per sec of incoming requests to Spanner, and reduce Spanner processing units by small increments until you find the minimum capacity required.
C. Use Cloud Monitoring to monitor the CPU utilization, and reduce Spanner processing units by small increments until you find the minimum
capacity required.
D. Use Snapshot Debugger to check for application errors, and reduce Spanner processing units by small increments until you find the minimum capacity required.
Correct Answer: C
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
Cloud Monitoring lets us observe the behavior of, and requests per second to, Cloud Spanner. By observing these parameters, we can reduce Spanner processing units in small increments until we find the minimum capacity required. After that, we can fine-tune Cloud Spanner parameters so that costs and resource utilization stay within limits.
The key here is: observe and improve.
upvoted 1 times
Selected Answer: C
option C
upvoted 1 times
Selected Answer: C
C is correct
upvoted 1 times
C is the answer.
https://cloud.google.com/spanner/docs/compute-capacity#increasing_and_decreasing_compute_capacity
upvoted 2 times
You recently deployed a Go application on Google Kubernetes Engine (GKE). The operations team has noticed that the application's CPU usage is
high even when there is low production traffic. The operations team has asked you to optimize your application's CPU resource consumption. You
want to determine which Go functions consume the largest amount of CPU. What should you do?
A. Deploy a Fluent Bit daemonset on the GKE cluster to log data in Cloud Logging. Analyze the logs to get insights into your application code’s
performance.
B. Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance metrics of your application.
C. Connect to your GKE nodes using SSH. Run the top command on the shell to extract the CPU utilization of your application.
D. Modify your Go application to capture profiling data. Analyze the CPU metrics of your application in flame graphs in Profiler.
Correct Answer: D
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Key here is flame graphs from the profiler. So using cloud profiler is the right choice.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/profiler/docs
Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production
applications. It attributes that information to the application's source code, helping you identify the parts of the application consuming the most
resources, and otherwise illuminating the performance characteristics of the code
upvoted 1 times
Selected Answer: D
option D
upvoted 1 times
D is the answer.
https://cloud.google.com/profiler/docs/about-profiler
Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production
applications. It attributes that information to the source code that generated it, helping you identify the parts of your application that are
consuming the most resources, and otherwise illuminating your application's performance characteristics.
upvoted 1 times
D is correct
https://cloud.google.com/profiler/docs
upvoted 1 times
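For reference, enabling Profiler in a Go service is a few lines at startup; the service name and version are placeholders:

package main

import (
	"log"

	"cloud.google.com/go/profiler"
)

func main() {
	// Start the profiling agent as early as possible; it continuously
	// samples CPU (and heap) usage and uploads profiles that appear as
	// flame graphs in the Profiler UI.
	if err := profiler.Start(profiler.Config{
		Service:        "payment-api", // placeholder service name
		ServiceVersion: "1.0.0",
	}); err != nil {
		log.Fatalf("profiler.Start: %v", err)
	}

	// ... run the application as usual ...
	select {} // placeholder for the real server loop
}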
Your team manages a Google Kubernetes Engine (GKE) cluster where an application is running. A different team is planning to integrate with this
application. Before they start the integration, you need to ensure that the other team cannot make changes to your application, but they can deploy the integration on their namespace. What should you do?
A. Using Identity and Access Management (IAM), grant the Viewer IAM role on the cluster project to the other team.
B. Create a new GKE cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.
C. Create a new namespace in the existing cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.
D. Create a new namespace in the existing cluster. Using Kubernetes role-based access control (RBAC), grant the Admin role on the new namespace to the other team.
Correct Answer: D
Selected Answer: D
D: You define permissions within a Role or ClusterRole object. A Role defines access to resources within a single Namespace, while a ClusterRole
defines access to resources in the entire cluster.
https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control
upvoted 1 times
Selected Answer: D
D is the answer.
upvoted 2 times
You have recently instrumented a new application with OpenTelemetry, and you want to check the latency of your application requests in Trace.
You want to ensure that a specific request is always traced. What should you do?
A. Wait 10 minutes, then verify that Trace captures those types of requests automatically.
B. Write a custom script that sends this type of request repeatedly from your dev project.
D. Add the X-Cloud-Trace-Context header to the request with the appropriate parameters.
Correct Answer: D
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
To ensure a specific request is traced, set the X-Cloud-Trace-Context header on the request. The value has the form TRACE_ID/SPAN_ID;o=1, where o=1 forces the trace to be stored.
upvoted 1 times
According to the Professional Google Cloud Developer documentation, to ensure that a specific request is always traced, the X-Cloud-Trace-
Context header must be added to the request with the appropriate parameters. This header ensures that the request is traced and added to the Trace list. Additionally, the documentation explains that the Trace ID and Span ID must be included in the header to ensure
that the request is correctly attributed to the trace. By using this method, developers can easily monitor and analyze the latency and performance
of their applications using Trace.
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/trace/docs/setup#force-trace
Cloud Trace doesn't sample every request.
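As a rough Go sketch of forcing a trace on a single request, per answer D; the URL, trace ID, and span ID below are placeholders you would generate yourself:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    req, err := http.NewRequest("GET", "https://your-app.example.com/checkout", nil)
    if err != nil {
        log.Fatal(err)
    }
    // Header format: TRACE_ID/SPAN_ID;o=1 — the o=1 option forces this
    // request to be traced instead of being subject to sampling.
    traceID := "105445aa7843bc8bf206b12000100000" // placeholder 32-hex-char trace ID
    spanID := "1"                                 // placeholder span ID
    req.Header.Set("X-Cloud-Trace-Context", fmt.Sprintf("%s/%s;o=1", traceID, spanID))
    if _, err := http.DefaultClient.Do(req); err != nil {
        log.Fatal(err)
    }
}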
You are trying to connect to your Google Kubernetes Engine (GKE) cluster using kubectl from Cloud Shell. You have deployed your GKE cluster with
a public endpoint. From Cloud Shell, you run the following command:
You notice that the kubectl commands time out without returning an error message. What is the most likely cause of this issue?
A. Your user account does not have privileges to interact with the cluster using kubectl.
B. Your Cloud Shell external IP address is not part of the authorized networks of the cluster.
C. The Cloud Shell is not part of the same VPC as the GKE cluster.
Correct Answer: D
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
Cloud Shell's public IP is not present in the authorized networks/IP list of the GKE cluster.
upvoted 1 times
Selected Answer: D
B is the answer.
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cloud_shell
If you want to use Cloud Shell to access the cluster, you must add the public IP address of your Cloud Shell to the cluster's list of authorized
networks.
upvoted 2 times
You are developing a web application that contains private images and videos stored in a Cloud Storage bucket. Your users are anonymous and do
not have Google Accounts. You want to use your application-specific logic to control access to the images and videos. How should you configure
access?
A. Cache each web application user's IP address to create a named IP table using Google Cloud Armor. Create a Google Cloud Armor security policy that allows access only from those cached IP addresses.
B. Grant the Storage Object Viewer IAM role to allUsers. Allow users to access the bucket after authenticating through your web application.
C. Configure Identity-Aware Proxy (IAP) to authenticate users into the web application. Allow users to access the bucket after authenticating
through IAP.
D. Generate a signed URL that grants read access to the bucket. Allow users to access the URL after authenticating through your web
application.
Correct Answer: D
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
The key here is "application-specific logic to control access to the images and videos". A signed URL with read-only permission and a limited access time is the right choice.
upvoted 1 times
D is ok
https://cloud.google.com/storage/docs/access-control/signed-urls#should-you-use
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/storage/docs/access-control/signed-urls#should-you-use
In some scenarios, you might not want to require your users to have a Google account in order to access Cloud Storage, but you still want to
control access using your application-specific logic. The typical way to address this use case is to provide a signed URL to a user, which gives the
user read, write, or delete access to that resource for a limited time. You specify an expiration time when you create the signed URL. Anyone who
knows the URL can access the resource until the expiration time for the URL is reached or the key used to sign the URL is rotated.
upvoted 2 times
Selected Answer: D
https://cloud.google.com/storage/docs/access-control/signed-urls#should-you-use
upvoted 3 times
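A minimal Go sketch of generating a V4 signed URL with the Cloud Storage client library; the bucket and object names are placeholders, credentials come from the environment, and your application would hand out this URL only after its own access checks pass:

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        log.Fatalf("storage.NewClient: %v", err)
    }
    defer client.Close()

    // Grant time-limited, read-only access to one private object.
    url, err := client.Bucket("my-private-media").SignedURL("videos/intro.mp4",
        &storage.SignedURLOptions{
            Scheme:  storage.SigningSchemeV4,
            Method:  "GET",
            Expires: time.Now().Add(15 * time.Minute),
        })
    if err != nil {
        log.Fatalf("SignedURL: %v", err)
    }
    fmt.Println(url)
}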
You need to configure a Deployment on Google Kubernetes Engine (GKE). You want to include a check that verifies that the containers can connect
to the database. If the Pod is failing to connect, you want a script on the container to run to complete a graceful shutdown. How should you configure the Deployment?
A. Create two jobs: one that checks whether the container can connect to the database, and another that runs the shutdown script if the Pod
is failing.
B. Create the Deployment with a livenessProbe for the container that will fail if the container can't connect to the database. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.
C. Create the Deployment with a PostStart lifecycle handler that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the Pod is failing.
D. Create the Deployment with an initContainer that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the Pod is failing.
Correct Answer: C
Selected Answer: B
https://cloud.google.com/architecture/best-practices-for-running-cost-effective-kubernetes-applications-on-
gke#make_sure_your_applications_are_shutting_down_in_accordance_with_kubernetes_expectations
upvoted 6 times
Selected Answer: B
I go with B: the liveness probe fails after the maximum retries, and the PreStop hook is then called to gracefully shut down the container. D is also very close, but it uses an initContainer to check database connectivity first, and I am not sure the PreStop hook runs if an initContainer fails to start.
upvoted 1 times
Selected Answer: B
https://cloud.google.com/architecture/best-practices-for-running-cost-effective-kubernetes-applications-on-
gke#make_sure_your_applications_are_shutting_down_in_accordance_with_kubernetes_expectations -> the preStop hook is a good option for
triggering a graceful shutdown without modifying the application.
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details ->
This hook is called immediately before a container is terminated due to an API request or management event such as a liveness/startup probe
failure, preemption, resource contention and others. A call to the PreStop hook fails if the container is already in a terminated or completed state
and the hook must complete before the TERM signal to stop the container can be sent. The Pod's termination grace period countdown begins
before the PreStop hook is executed, so regardless of the outcome of the handler, the container will eventually terminate within the Pod's
termination grace period. No parameters are passed to the handler.
upvoted 1 times
Answer B
upvoted 1 times
B is the answer.
upvoted 2 times
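To make the container side of answer B concrete, here is a rough Go sketch: a /healthz endpoint that the livenessProbe can hit (failing when a database check fails) plus SIGTERM handling for a graceful shutdown. The database check is a stand-in you would replace with a real db.PingContext; ports and timeouts are illustrative:

package main

import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    // Placeholder for a real connectivity check, e.g. db.PingContext(ctx).
    checkDB := func(ctx context.Context) error { return nil }

    mux := http.NewServeMux()
    mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
        defer cancel()
        if err := checkDB(ctx); err != nil {
            // The livenessProbe sees a non-200 and eventually restarts the Pod.
            http.Error(w, "database unreachable", http.StatusServiceUnavailable)
            return
        }
        w.WriteHeader(http.StatusOK)
    })

    srv := &http.Server{Addr: ":8080", Handler: mux}
    go func() {
        if err := srv.ListenAndServe(); err != http.ErrServerClosed {
            log.Fatalf("server error: %v", err)
        }
    }()

    // Kubernetes sends SIGTERM after the PreStop hook completes; shut down
    // gracefully within the Pod's termination grace period.
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, syscall.SIGTERM)
    <-stop
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    _ = srv.Shutdown(ctx)
}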
You are responsible for deploying a new API. That API will have three different URL paths:
• https://yourcompany.com/students
• https://yourcompany.com/teachers
• https://yourcompany.com/classes
You need to configure each API URL path to invoke a different function in your code. What should you do?
A. Create one Cloud Function as a backend service exposed using an HTTPS load balancer.
B. Create three Cloud Functions exposed directly.
C. Create one Cloud Function exposed directly.
D. Create three Cloud Functions as three backend services exposed using an HTTPS load balancer.
Correct Answer: D
Selected Answer: B
I go with B. Exposing them through an HTTPS load balancer is not required; those three are different endpoints of the service. We don't need to set up a load balancer for Cloud Functions, since they are serverless.
upvoted 1 times
Selected Answer: B
Option B (Create three Cloud Functions exposed directly) is the best choice in this scenario, as it allows you to create a separate Cloud Function for
each API URL path and configure each one to invoke a different function in your code.
Option A (Create one Cloud Function as a backend service exposed using an HTTPS load balancer) and Option D (Create three Cloud Functions as
three backend services exposed using an HTTPS load balancer) both involve using an HTTPS load balancer, which adds additional complexity and
configuration overhead. These options may be appropriate for more complex scenarios, but in this case, they are not necessary.
Option C (Create one Cloud Function exposed directly) would require all three API URL paths to invoke the same function in your code, which does
not meet the requirement of invoking different functions for each URL path.
upvoted 1 times
I choose D.
upvoted 1 times
Each function is defined as an HTTP trigger, which allows it to be triggered by incoming HTTP requests. The endpoint for each function is defined
in the function name (e.g. "students", "teachers", "classes").
This means that the APIs would be accessible at the following endpoints:
• https://yourcompany.com/students
• https://yourcompany.com/teachers
• https://yourcompany.com/classes
In this case, option B, "Create three Cloud Functions exposed directly", would be correct.
upvoted 3 times
Selected Answer: D
option D
upvoted 1 times
Selected Answer: D
D is the answer.
upvoted 1 times
Selected Answer: D
D is correct
upvoted 1 times
You are deploying a microservices application to Google Kubernetes Engine (GKE). The application will receive daily updates. You expect to deploy
a large number of distinct containers that will run on the Linux operating system (OS). You want to be alerted to any known OS vulnerabilities in
the new containers. You want to follow Google-recommended best practices. What should you do?
A. Use the gcloud CLI to call Container Analysis to scan new container images. Review the vulnerability results before each deployment.
B. Enable Container Analysis, and upload new container images to Artifact Registry. Review the vulnerability results before each deployment.
C. Enable Container Analysis, and upload new container images to Artifact Registry. Review the critical vulnerability results before each
deployment.
D. Use the Container Analysis REST API to call Container Analysis to scan new container images. Review the vulnerability results before each
deployment.
Correct Answer: D
Selected Answer: B
B. Actually the tricky part of this question is: is Container Analysis enabled by default? Can Container Analysis be called on demand via REST without specifically enabling it? By default GCP does not enable Container Analysis; that's why D is out.
upvoted 2 times
https://cloud.google.com/artifact-registry/docs/analysis
When automatic scanning is enabled, scanning triggers automatically every time you push a new image to Artifact Registry or Container Registry.
Vulnerability information is continuously updated when new vulnerabilities are discovered.
When On-Demand Scanning is enabled, you must run a command to scan a local image or an image in Artifact Registry or Container Registry. On-
Demand Scanning gives you more flexibility around when you scan containers. For example, you can scan a locally-built image and remediate
vulnerabilities before storing it in a registry.
Scanning results are available for up to 48 hours after the scan is completed, and vulnerability information is not updated after the scan.
upvoted 1 times
Selected Answer: B
Container Analysis is a service that provides vulnerability scanning and metadata storage for containers. The scanning service performs vulnerability
scans on images in Container Registry and Artifact Registry, then stores the resulting metadata and makes it available for consumption through an
API. Metadata storage allows storing information from different sources, including vulnerability scanning, other Google Cloud services, and third-
party providers.
https://cloud.google.com/container-analysis/docs/container-analysis
upvoted 1 times
Selected Answer: B
option B
upvoted 1 times
B is the answer.
https://cloud.google.com/container-analysis/docs/automated-scanning-howto
upvoted 1 times
Answer B
If you have done DevOps, you will understand.
upvoted 1 times
Selected Answer: B
Answer B
upvoted 1 times
Selected Answer: B
https://cloud.google.com/container-analysis/docs/os-overview says:
The Container Scanning API allows you to automate OS vulnerability detection, scanning each time you push an image to Container Registry or
Artifact Registry. Enabling this API also triggers language package scans for Go and Java vulnerabilities (Preview).
upvoted 1 times
https://cloud.google.com/container-analysis/docs/enable-container-scanning
upvoted 1 times
https://cloud.google.com/container-analysis/docs/os-overview
upvoted 1 times
You are a developer at a large organization. You have an application written in Go running in a production Google Kubernetes Engine (GKE) cluster.
You need to add a new feature that requires access to BigQuery. You want to grant BigQuery access to your GKE cluster following Google-recommended best practices. What should you do?
A. Create a Google service account with BigQuery access. Add the JSON key to Secret Manager, and use the Go client library to access the
JSON key.
B. Create a Google service account with BigQuery access. Add the Google service account JSON key as a Kubernetes secret, and configure your application to use it.
C. Create a Google service account with BigQuery access. Add the Google service account JSON key to Secret Manager, and use an init container to retrieve the key and make it available to the application.
D. Create a Google service account and a Kubernetes service account. Configure Workload Identity on the GKE cluster, and reference the Kubernetes service account on the application Deployment.
Correct Answer: D
Selected Answer: D
Workload Identity allows a Kubernetes service account in your GKE cluster to act as an IAM service account. Pods that use the configured
Kubernetes service account automatically authenticate as the IAM service account when accessing Google Cloud APIs. Using Workload Identity
allows you to assign distinct, fine-grained identities and authorization for each application in your cluster.
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity#what_is
upvoted 1 times
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity#what_is
Answer D
upvoted 1 times
Selected Answer: D
option D
upvoted 1 times
Selected Answer: D
D is the answer.
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity#what_is
Applications running on GKE might need access to Google Cloud APIs such as Compute Engine API, BigQuery Storage API, or Machine Learning
APIs.
Workload Identity allows a Kubernetes service account in your GKE cluster to act as an IAM service account. Pods that use the configured
Kubernetes service account automatically authenticate as the IAM service account when accessing Google Cloud APIs. Using Workload Identity
allows you to assign distinct, fine-grained identities and authorization for each application in your cluster.
upvoted 1 times
Selected Answer: A
I vote A because of the type of auth supported by BigQuery, and because the recommended way to authenticate is to use the Go libraries:
https://cloud.google.com/bigquery/docs/authorization
https://pkg.go.dev/golang.org/x/oauth2/google?utm_source=cloud.google.com&utm_medium=referral#JWTAccessTokenSourceFromJSON
upvoted 1 times
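As a sketch of why D is attractive from the application side: with Workload Identity configured, the Go BigQuery client obtains credentials through Application Default Credentials, so no JSON key ever appears in the code. The project ID below is a placeholder:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/bigquery"
)

func main() {
    ctx := context.Background()
    // No key file needed: the Pod's Kubernetes service account, mapped to
    // an IAM service account via Workload Identity, supplies the credentials.
    client, err := bigquery.NewClient(ctx, "my-project")
    if err != nil {
        log.Fatalf("bigquery.NewClient: %v", err)
    }
    defer client.Close()
    // ... run queries with client.Query(...) ...
}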
You have an application written in Python running in production on Cloud Run. Your application needs to read/write data stored in a Cloud Storage
bucket in the same project. You want to grant access to your application following the principle of least privilege. What should you do?
A. Create a user-managed service account with a custom Identity and Access Management (IAM) role.
B. Create a user-managed service account with the Storage Admin Identity and Access Management (IAM) role.
C. Create a user-managed service account with the Project Editor Identity and Access Management (IAM) role.
D. Use the default service account linked to the Cloud Run revision in production.
Correct Answer: A
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
principle of least privilege -> custom Identity and Access Management (IAM) role
upvoted 1 times
Selected Answer: A
Answer is A.
The others grant too much access.
upvoted 2 times
Selected Answer: A
A is the answer.
upvoted 2 times
Selected Answer: A
Not B - https://cloud.google.com/iam/docs/understanding-roles#storage.admin
C and D grant too much access.
upvoted 1 times
Your team is developing unit tests for Cloud Function code. The code is stored in a Cloud Source Repositories repository. You are responsible for
implementing the tests. Only a specific service account has the necessary permissions to deploy the code to Cloud Functions. You want to ensure
that the code cannot be deployed without first passing the tests. How should you configure the unit testing process?
A. Configure Cloud Build to deploy the Cloud Function. If the code passes the tests, a deployment approval is sent to you.
B. Configure Cloud Build to deploy the Cloud Function, using the specific service account as the build agent. Run the unit tests after
successful deployment.
C. Configure Cloud Build to run the unit tests. If the code passes the tests, the developer deploys the Cloud Function.
D. Configure Cloud Build to run the unit tests, using the specific service account as the build agent. If the code passes the tests, Cloud Build deploys the Cloud Function.
Correct Answer: B
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Correct answer is D. First run the unit tests, and if they all pass, then deploy the Cloud Function.
upvoted 1 times
Selected Answer: D
D. Configure Cloud Build to run the unit tests, using the specific service account as the build agent. If the code passes the tests, Cloud Build
deploys the Cloud Function.
This ensures that only the specific service account, which has the necessary permissions, is able to deploy the code after it has passed the unit tests.
The developer does not need to worry about deploying the code, and the code cannot be deployed without passing the tests.
upvoted 1 times
Answer D
upvoted 1 times
option D
upvoted 1 times
Selected Answer: D
D is the answer.
upvoted 1 times
Selected Answer: D
Answer D
upvoted 2 times
Your team detected a spike of errors in an application running on Cloud Run in your production project. The application is configured to read
messages from Pub/Sub topic A, process the messages, and write the messages to topic B. You want to conduct tests to identify the cause of the
errors. You can use a set of mock messages for testing. What should you do?
A. Deploy the Pub/Sub and Cloud Run emulators on your local machine. Deploy the application locally, and change the logging level in the
application to DEBUG or INFO. Write mock messages to topic A, and then analyze the logs.
B. Use the gcloud CLI to write mock messages to topic A. Change the logging level in the application to DEBUG or INFO, and then analyze the
logs.
C. Deploy the Pub/Sub emulator on your local machine. Point the production application to your local Pub/Sub topics. Write mock messages to topic A, and then analyze the logs.
D. Use the Google Cloud console to write mock messages to topic A. Change the logging level in the application to DEBUG or INFO, and then analyze the logs.
Correct Answer: C
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
A is right: run the Pub/Sub and Cloud Run emulators in a local environment, publish mock messages to the topic, and enable INFO and DEBUG logs to see detailed log information.
upvoted 1 times
Selected Answer: A
A is the answer.
upvoted 1 times
Selected Answer: A
going with A because it mentions the 2 points of possible failure and gives you a full scenario to analyse
upvoted 2 times
Selected Answer: A
https://cloud.google.com/pubsub/docs/emulator
upvoted 1 times
You are developing a Java Web Server that needs to interact with Google Cloud services via the Google Cloud API on the user's behalf. Users
should be able to authenticate to the Google Cloud API using their Google Cloud identities. Which workflow should you implement in your web
application?
A. 1. When a user arrives at your application, prompt them for their Google username and password.
2. Store an SHA password hash in your application's database along with the user's username.
3. The application authenticates to the Google Cloud API using HTTPS requests with the user's username and password hash in the Authorization header.
B. 1. When a user arrives at your application, prompt them for their Google username and password.
2. Forward the user's username and password in an HTTPS request to the Google Cloud authorization server, and request an access token.
3. The Google server validates the user's credentials and returns an access token to the application.
4. The application uses the access token to call the Google Cloud API.
C. 1. When a user arrives at your application, route them to a Google Cloud consent screen with a list of requested permissions that prompts the user to sign in and grant access.
2. After the user signs in and provides consent, your application receives an authorization code from a Google server.
3. The Google server returns the authorization code to the user, which is stored in the browser's cookies.
4. The user authenticates to the Google Cloud API using the authorization code in the cookie.
D. 1. When a user arrives at your application, route them to a Google Cloud consent screen with a list of requested permissions that prompts the user to sign in and grant access.
2. After the user signs in and provides consent, your application receives an authorization code from a Google server.
3. The application requests a Google Server to exchange the authorization code with an access token.
4. The Google server responds with the access token that is used by the application to call the Google Cloud API.
Correct Answer: C
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
D is right. The OAuth 2.0 authorization code grant flow is the technique for using Google APIs to access resource servers.
upvoted 1 times
Selected Answer: D
https://developers.google.com/identity/protocols/oauth2
upvoted 1 times
D is the answer.
You need to use Google's OAuth, so A and B are eliminated.
C uses the authorization code stored in a cookie, which is not how it works.
Selected Answer: D
D is the answer.
https://developers.google.com/identity/protocols/oauth2#webserver
The Google OAuth 2.0 endpoint supports web server applications that use languages and frameworks such as PHP, Java, Python, Ruby, and
ASP.NET.
The authorization sequence begins when your application redirects a browser to a Google URL; the URL includes query parameters that indicate
the type of access being requested. Google handles the user authentication, session selection, and user consent. The result is an authorization
code, which the application can exchange for an access token and a refresh token.
upvoted 1 times
Selected Answer: D
I do agree with D
upvoted 1 times
Selected Answer: D
https://developers.google.com/identity/protocols/oauth2
upvoted 3 times
You recently developed a new application. You want to deploy the application on Cloud Run without a Dockerfile. Your organization requires that
all container images are pushed to a centrally managed container repository. How should you build your container using Google Cloud services?
(Choose two.)
A. Push your source code to Artifact Registry.
B. Submit a Cloud Build job to push the image.
C. Use the pack build command with pack CLI.
D. Include the --source flag with the gcloud run deploy CLI command.
E. Include the --platform=kubernetes flag with the gcloud run deploy CLI command.
Correct Answer: CE
Selected Answer: BC
The actual question is "How should you build your container using Google Cloud services?", so it doesn't mention how to deploy it.
Also, if we exclude B, how does the image built in C end up at the central container repository?
upvoted 1 times
Selected Answer: CD
C uses buildpacks (the pack CLI) to create an image. This is a very efficient way of creating images explicitly.
D is through the gcloud run deploy command. Not sure what framework Cloud Build uses to create an image.
upvoted 1 times
I choose CD.
upvoted 1 times
Selected Answer: CD
https://cloud.google.com/run/docs/deploying-source-code
upvoted 1 times
A and B are the correct options because they both involve using Cloud Build to build the container image.
Option A, Push your source code to Artifact Registry, allows you to store the source code of your application in a central location, making it easier
to manage and version control.
Option B, Submit a Cloud Build job to push the image, allows you to use Cloud Build to build the container image, which is a recommended
method for building container images in a production environment. It allows you to automate the build process, test the image, and push it to a
container registry.
upvoted 2 times
Option D, Include the --source flag with the gcloud run deploy CLI command, is not correct because this flag is used to specify the source code
location when deploying the application, not building the container.
Option E, Include the --platform=kubernetes flag with the gcloud run deploy CLI command, is not correct because this flag is used to specify
the platform when deploying the application on Kubernetes and not Cloud Run.
upvoted 3 times
Selected Answer: CD
Cloud Run uses buildpacks to automatically build container images from source code, but you need to add the --source flag to your command: gcloud run deploy --source=/PATH/
Answer C & D
upvoted 1 times
CD is the answer.
https://cloud.google.com/run/docs/deploying-source-code
This page describes how to deploy new services and new revisions to Cloud Run directly from source code using a single gcloud CLI command,
gcloud run deploy with the --source flag.
Behind the scenes, this command uses Google Cloud's buildpacks and Cloud Build to automatically build container images from your source code
without having to install Docker on your machine or set up buildpacks or Cloud Build.
upvoted 1 times
Selected Answer: CD
C: Google Cloud for buildpacks—an open-source technology that makes it fast and easy for you to create secure, production-ready container
images from source code and without a Dockerfile.
https://cloud.google.com/blog/products/containers-kubernetes/google-cloud-now-supports-buildpacks (also mentioned by TNT87)
D: Deploying from source code. "This page describes how to deploy new services and new revisions to Cloud Run directly from source code using a
single gcloud CLI command, gcloud run deploy with the --source flag."
https://cloud.google.com/run/docs/deploying-source-code
A is incorrect because Artifact Registry is for container images, not source code.
B is incorrect because only the built image needs to be deployed to Cloud Run. "A centrally managed container repository" can be somewhere outside of Google, as can the build tool. It doesn't necessarily have to be built on Cloud Build.
E is irrelevant in this case, as K8S is not involved in this question.
upvoted 3 times
If it deploys to Cloud Run, it needs to be fully managed. Then the platform cannot be "kubernetes" - use the default value "managed" instead.
See https://cloud.google.com/sdk/gcloud/reference/run/deploy#--platform
upvoted 1 times
https://dev.to/alvardev/gcp-cloud-run-containers-without-dockerfile-2jh3
upvoted 1 times
Selected Answer: AC
Answer A
https://cloud.google.com/run/docs/deploying#images
Answer C
https://cloud.google.com/blog/products/containers-kubernetes/google-cloud-now-supports-buildpacks
upvoted 1 times
Read this to understand; I can tell you have never done DevOps at all.
https://cloud.google.com/run/docs/deploying-source-code
upvoted 1 times
You work for an organization that manages an online ecommerce website. Your company plans to expand across the world; however, the estore
currently serves one specific region. You need to select a SQL database and configure a schema that will scale as your organization grows. You
want to create a table that stores all customer transactions and ensure that the customer (CustomerId) and the transaction (TransactionId) are unique. What should you do?
A. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the
TransactionId.
B. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the
TransactionId.
C. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the
TransactionId.
D. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the
TransactionId.
Correct Answer: B
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
The requirement is to scale globally, so Cloud Spanner is the best fit. A UUID TransactionId also avoids the write hotspots that monotonically increasing keys create in Spanner, and it prevents guessing the next transaction ID.
upvoted 1 times
Answer C: Cloud Spanner for multi-region, and a UUID primary key to guarantee uniqueness.
upvoted 1 times
Selected Answer: C
https://cloud.google.com/spanner/docs/schema-design#uuid_primary_key
Answer C
upvoted 1 times
Selected Answer: C
option C
upvoted 1 times
Globally available --> Cloud Spanner (multi-region). Cloud SQL is a regional service.
upvoted 1 times
Selected Answer: C
C is the answer.
upvoted 1 times
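For reference, a rough Go sketch of answer C's schema choice in practice, using a random UUIDv4 TransactionId; the database path, table, and column names are all illustrative:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/spanner"
    "github.com/google/uuid"
)

func main() {
    ctx := context.Background()
    // Database path is a placeholder.
    client, err := spanner.NewClient(ctx,
        "projects/my-project/instances/my-instance/databases/estore")
    if err != nil {
        log.Fatalf("spanner.NewClient: %v", err)
    }
    defer client.Close()

    // A random UUIDv4 TransactionId spreads writes across the keyspace,
    // avoiding the hotspots that monotonically increasing keys create.
    m := spanner.Insert("Transactions",
        []string{"CustomerId", "TransactionId", "AmountCents"},
        []interface{}{"customer-123", uuid.NewString(), int64(4999)})
    if _, err := client.Apply(ctx, []*spanner.Mutation{m}); err != nil {
        log.Fatalf("Apply: %v", err)
    }
}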
You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory
utilization. You need to determine which source code is consuming the most CPU and memory resources. What should you do?
A. Download, install, and start the Snapshot Debugger agent in your VM. Take debug snapshots of the functions that take the longest time.
Review the call stack frame, and identify the local variables at that level in the stack.
B. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions.
C. Import OpenTelemetry and Trace export packages into your application, and create the trace provider.
Review the latency data for your application on the Trace overview page, and identify where bottlenecks are occurring.
D. Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the
timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.
Correct Answer: B
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
The focus is to find which functions are the most CPU- and memory-intensive. Flame graphs highlight those functions in a graphical way. B is the best answer.
upvoted 1 times
https://cloud.google.com/profiler/docs
upvoted 1 times
Selected Answer: B
B. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud
console to identify time-intensive functions.
Option B is the best solution because it involves importing the Cloud Profiler package into the application, initializing the Profiler agent, and
reviewing the generated flame graph in the Google Cloud console. This will allow you to identify time-intensive functions and determine which
source code is consuming the most CPU and memory resources. The flame graph is a visualization of the call stack and it can be used to identify
bottlenecks in the application.
Options A and C are also related to profiling, but they don't exactly focus on identifying time-intensive functions. Option D is not the best option because it would be more complex and less efficient than using a profiler.
upvoted 1 times
Selected Answer: B
https://cloud.google.com/profiler/docs/about-profiler#profiling_agent
https://cloud.google.com/profiler/docs/about-profiler#environment_and_languages
Answer B
upvoted 2 times
Selected Answer: B
option B
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/profiler/docs/about-profiler
Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production
applications. It attributes that information to the source code that generated it, helping you identify the parts of your application that are
consuming the most resources, and otherwise illuminating your application's performance characteristics.
upvoted 1 times
You have a container deployed on Google Kubernetes Engine. The container can sometimes be slow to launch, so you have implemented a
liveness probe. You notice that the liveness probe occasionally fails on launch. What should you do?
A. Add a startup probe.
B. Increase the initial delay for the liveness probe.
C. Increase the CPU limit for the container.
D. Add a readiness probe.
Correct Answer: D
Selected Answer: A
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes
The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness
checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow
starting containers, avoiding them getting killed by the kubelet before they are up and running.
upvoted 6 times
Selected Answer: A
A startup probe is a probe that Kubernetes uses to determine if a container has started successfully. If the startup probe fails, Kubernetes will
restart the container.
upvoted 1 times
Selected Answer: D
The readiness probe is the right answer. The liveness probe fails if it probes a container that is not yet ready to serve traffic, so we need to add a readiness probe. There is no such thing as a startup probe in Kubernetes.
upvoted 1 times
Selected Answer: B
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Caution: Liveness probes do not wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use
initialDelaySeconds or a startupProbe.
upvoted 1 times
Selected Answer: B
A liveness probe checks if the container is running as expected, and if not, it restarts it. If the container is slow to launch, it may take some time for
it to fully start up and be able to respond to the liveness probe. Increasing the initial delay for the liveness probe can help mitigate this issue by
giving the container more time to start up before the probe begins checking its status. This can help reduce the likelihood of false-positive failures
during launch.
upvoted 1 times
A. Adding a startup probe is useful for determining when a container has started, but it won't help with the problem of the liveness probe
occasionally failing on launch.
B. Increasing the initial delay for the liveness probe might help if the container is taking longer than the delay to start, but it's not a guaranteed
solution.
C. Increasing the CPU limit for the container may help if the container is running out of resources, but it may not be necessary if the issue is related
to the container's initialization process.
D. A readiness probe can help determine when a container is ready to receive traffic, but it won't help with the problem of the liveness probe
occasionally failing on launch.
upvoted 3 times
Selected Answer: D
The problem is that the liveness probe fires too early, so we need a startup probe to determine when the liveness (and potentially readiness) probes are valid.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
upvoted 2 times
B is the answer.
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-
probes
upvoted 2 times
option B
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
upvoted 1 times
Selected Answer: B
You work for an organization that manages an ecommerce site. Your application is deployed behind a global HTTP(S) load balancer. You need to
test a new product recommendation algorithm. You plan to use A/B testing to determine the new algorithm’s effect on sales in a randomized way. What should you do?
Correct Answer: C
Selected Answer: A
Splitting traffic between versions using weights is a common way to implement A/B testing. To do this, you would create two versions of your
application, one with the new recommendation algorithm and one without. You would then configure the load balancer to split traffic between the
two versions using weights. For example, you could configure the load balancer to send 50% of traffic to the new version and 50% of traffic to the
old version.
upvoted 1 times
https://cloud.google.com/traffic-director/docs/advanced-traffic-management#weight-based_traffic_splitting_for_safer_deployments
upvoted 1 times
Selected Answer: A
https://cloud.google.com/traffic-director/docs/advanced-traffic-management#weight-based_traffic_splitting_for_safer_deployments
upvoted 1 times
Selected Answer: A
option A
upvoted 1 times
Selected Answer: A
A is the answer.
https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_actions_weight-based_traffic_splitting
Deploying a new version of an existing production service generally incurs some risk. Even if your tests pass in staging, you probably don't want to
subject 100% of your users to the new version immediately. With traffic management, you can define percentage-based traffic splits across
multiple backend services.
For example, you can send 95% of the traffic to the previous version of your service and 5% to the new version of your service. After you've
validated that the new production version works as expected, you can gradually shift the percentages until 100% of the traffic reaches the new
version of your service. Traffic splitting is typically used for deploying new versions, A/B testing, service migration, and similar processes.
upvoted 1 times
Selected Answer: A
A is the answer
upvoted 1 times
Selected Answer: A
https://cloud.google.com/traffic-director/docs/advanced-traffic-management#weight-based_traffic_splitting_for_safer_deployments
upvoted 2 times
Selected Answer: A
https://cloud.google.com/load-balancing/docs/https/traffic-management-global
upvoted 2 times
Selected Answer: A
https://cloud.google.com/architecture/implementing-deployment-and-testing-strategies-on-gke#split_the_traffic_2
https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_actions_weight-based_traffic_splitting
upvoted 1 times
You plan to deploy a new application revision with a Deployment resource to Google Kubernetes Engine (GKE) in production. The container might
not work correctly. You want to minimize risk in case there are issues after deploying the revision. You want to follow Google-recommended best practices. What should you do?
A. Perform a rolling update with a PodDisruptionBudget of 80%.
B. Perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.
C. Convert the Deployment to a StatefulSet, and perform a rolling update with a PodDisruptionBudget of 80%.
D. Convert the Deployment to a StatefulSet, and perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.
Correct Answer: D
Selected Answer: A
By performing a rolling update with a PDB of 80%, you can ensure that at least 80% of the Pods are always available during the deployment. This
will minimize the risk of downtime in case there are issues with the new revision.
upvoted 1 times
https://kubernetes.io/docs/tasks/run-application/configure-pdb/#identify-an-application-to-protect
upvoted 2 times
A rolling update with a PodDisruptionBudget (PDB) of 80% helps to minimize the risk of issues after deploying a new revision to a production
environment in GKE. The PDB specifies the number of pods in a deployment that must remain available during an update, ensuring that there is
sufficient capacity to handle any increase in traffic or demand. By setting a PDB of 80%, you ensure that at least 80% of the pods are available
during the update, reducing the risk of disruption to your application. This is a recommended best practice by Google for deploying updates to
production environments in GKE.
upvoted 1 times
https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
https://cloud.google.com/blog/products/containers-kubernetes/ensuring-reliability-and-uptime-for-your-gke-cluster
Answer A
upvoted 1 times
A is the answer.
https://cloud.google.com/blog/products/containers-kubernetes/ensuring-reliability-and-uptime-for-your-gke-cluster
Setting PodDisruptionBudget ensures that your workloads have a sufficient number of replicas, even during maintenance. Using the PDB, you can
define a number (or percentage) of pods that can be terminated, even if terminating them brings the current replica count below the desired value.
With PDB configured, Kubernetes will drain a node following the configured disruption schedule. New pods will be deployed on other available
nodes. This approach ensures Kubernetes schedules workloads in an optimal way while controlling the disruption based on the PDB configuration.
upvoted 1 times
Selected Answer: A
https://blog.knoldus.com/how-to-avoid-outages-in-your-kubernetes-cluster-using-pdb/
upvoted 1 times
Before promoting your new application code to production, you want to conduct testing across a variety of different users. Although this plan is
risky, you want to test the new version of the application with production users and you want to control which users are forwarded to the new
version of the application based on their operating system. If bugs are discovered in the new version, you want to roll back the newly deployed version quickly. What should you do?
A. Deploy your application on Cloud Run. Use traffic splitting to direct a subset of user traffic to the new version based on the revision tag.
B. Deploy your application on Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct a subset of user traffic to the new version based on the user-agent header.
C. Deploy your application on App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address.
D. Deploy your application on Compute Engine. Use Traffic Director to direct a subset of user traffic to the new version based on predefined
weights.
Correct Answer: B
Selected Answer: B
Anthos Service Mesh is a fully managed service that provides a wide range of features for managing microservices, including traffic splitting. Traffic
splitting allows you to distribute traffic between different versions of your application based on a variety of factors, such as the user-agent header.
upvoted 1 times
B is perfect. The key is to split traffic based on the type of OS, and that information can be retrieved from the User-Agent header.
upvoted 1 times
B is the answer.
upvoted 1 times
Selected Answer: B
The key point for this question is the last two words of this statement: "you want to control which users are forwarded to the new version of the
application based on their operating system". Operating system. Where could the developers find the OS for a certain user? That's the User-Agent
header. Example of a header: Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:42.0) Gecko/20100101 Firefox/42.0.
The requirement is "you want to control which users are forwarded to the new version of the application based on their operating system".
https://cloud.google.com/traffic-director/docs/ingress-traffic#sending-traffic
upvoted 3 times
Selected Answer: C
You can use traffic splitting to specify a percentage distribution of traffic across two or more of the versions within a service. Splitting traffic allows
you to conduct A/B testing between your versions and provides control over the pace when rolling out features.
https://cloud.google.com/appengine/docs/legacy/standard/python/splitting-traffic#ip_address_splitting Answer C
upvoted 1 times
Your team is writing a backend application to implement the business logic for an interactive voice response (IVR) system that will support a
payroll application. The IVR system has the following technical characteristics:
• The IVR system creates a separate persistent gRPC connection to the backend for each session.
• If the connection is interrupted, the IVR system establishes a new connection, causing a slight latency for that call.
You need to determine which compute environment should be used to deploy the backend application. Using current call data, you determine that:
• There are significant spikes of calls around certain known dates (e.g., pay days), or when large payroll changes occur.
You want to minimize cost, effort, and operational overhead. Where should you deploy the backend application?
A. Compute Engine
C. Cloud Functions
D. Cloud Run
Correct Answer: D
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Cloud Run is more suitable for gRPC communication between microservices.
The key here is "gRPC connection to the backend for each session".
upvoted 1 times
Answer D
upvoted 1 times
Selected Answer: D
D is the answer.
upvoted 2 times
Selected Answer: D
Answer D
This page shows Cloud Run-specific details for developers who want to use gRPC to connect a Cloud Run service with other services, for example,
to provide simple, high performance communication between internal microservices. You can use all gRPC types, streaming or unary, with Cloud
Run.
upvoted 2 times
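As a rough Go sketch of the Cloud Run backend in answer D: a gRPC server that listens on the port Cloud Run injects via $PORT. The service registration line is a hypothetical generated stub you would substitute with your own:

package main

import (
    "log"
    "net"
    "os"

    "google.golang.org/grpc"
)

func main() {
    // Cloud Run injects the listening port through the PORT env variable.
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    lis, err := net.Listen("tcp", ":"+port)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s := grpc.NewServer()
    // pb.RegisterPayrollServiceServer(s, &server{}) // hypothetical generated registration
    log.Printf("gRPC server listening on :%s", port)
    if err := s.Serve(lis); err != nil {
        log.Fatalf("serve error: %v", err)
    }
}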
You are developing an application hosted on Google Cloud that uses a MySQL relational database schema. The application will have a large
volume of reads and writes to the database and will require backups and ongoing capacity planning. Your team does not have time to fully
manage the database but can take on small administrative tasks. How should you host the database?
A. Configure Cloud SQL to host the database, and import the schema into Cloud SQL.
B. Deploy MySQL from the Google Cloud Marketplace, connect to the database using a client, and import the schema.
C. Configure Bigtable to host the database, and import the data into Bigtable.
D. Configure Cloud Spanner to host the database, and import the schema into Cloud Spanner.
E. Configure Firestore to host the database, and import the data into Firestore.
Correct Answer: D
Selected Answer: A
It is a good choice for applications that require a high volume of reads and writes, as well as regular backups and capacity planning.
upvoted 1 times
Selected Answer: A
I go with A, since Cloud SQL is a fully managed service that involves less operational overhead.
upvoted 1 times
Selected Answer: A
A or D.
Cloud SQL is ideal for heavy reads but not ideal for heavy writes; Spanner is ideal for both reads and writes but is aimed more at global scale.
Either way, both are extremely fast, so go for A.
upvoted 2 times
A. Configure Cloud SQL to host the database, and import the schema into Cloud SQL.
Cloud SQL is a fully-managed service that makes it easy to set up, maintain, manage, and administer your relational databases on Google Cloud. It
is specifically designed for MySQL, so it is a good fit for this use case. With Cloud SQL, you can automatically backup your data, and perform
capacity planning, so you don't have to worry about managing the infrastructure. Additionally, Cloud SQL provides high availability, automatic
failover and easy scaling.
Options D and E are not correct: migrating a MySQL schema to Cloud Spanner is not a simple import, and Firestore is a document database that is not suitable for a relational schema.
upvoted 1 times
Selected Answer: A
The answer A is more likely to be the correct one. Although Cloud Spanner is also a relational DB service (and has certain advantages over Cloud
SQL), migrating from MySQL to Cloud Spanner is not as trivial as "import the schema" (as stated in answer D). If D has been excluded, the only
relational DB option in the answers is A: Cloud SQL.
https://cloud.google.com/spanner/docs/migrating-mysql-to-spanner#migration-process
upvoted 2 times
A is the answer.
https://cloud.google.com/sql/docs/mysql
Cloud SQL for MySQL is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational
databases on Google Cloud Platform.
upvoted 2 times
Selected Answer: A
Cloud SQL: Cloud SQL is a web service that allows you to create, configure, and use relational databases that live in Google's cloud. It is a fully-managed service that maintains, manages, and administers your databases, allowing you to focus on your applications and services.
Answer A
upvoted 3 times
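For illustration, here is a hedged sketch of connecting to a Cloud SQL for MySQL instance from Python, assuming the cloud-sql-python-connector and PyMySQL packages; the instance connection name, credentials, and database are placeholders.

```python
# Hedged sketch: opening a connection to Cloud SQL for MySQL via the Cloud SQL
# Python Connector. All names and credentials below are placeholders.
from google.cloud.sql.connector import Connector

connector = Connector()

def get_connection():
    return connector.connect(
        "my-project:us-central1:my-instance",  # placeholder instance connection name
        "pymysql",
        user="app-user",
        password="change-me",
        db="inventory",
    )

conn = get_connection()
with conn.cursor() as cur:
    cur.execute("SELECT 1")  # smoke test against the imported schema
    print(cur.fetchone())
conn.close()
```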
A large volume of reads and writes to the database, plus backups and ongoing capacity planning: that's Bigtable. Changing my answer to C.
upvoted 1 times
Selected Answer: A
Answer A
upvoted 1 times
You are developing a new web application using Cloud Run and committing code to Cloud Source Repositories. You want to deploy new code in
the most efficient way possible. You have already created a Cloud Build YAML file that builds a container and runs the following command: gcloud
A. Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a Pub/Sub trigger that runs the build file when an event
B. Create a build trigger that runs the build file in response to a repository code being pushed to the development branch.
C. Create a webhook build trigger that runs the build file in response to HTTP POST calls to the webhook URL.
D. Create a Cron job that runs the following command every 24 hours: gcloud builds submit.
Correct Answer: B
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
I go with B.
A code commit to the repository should trigger the build process.
C is more complicated because of the webhook POST URL.
upvoted 1 times
Selected Answer: B
https://cloud.google.com/build/docs/triggers
upvoted 1 times
Selected Answer: B
B is the answer.
https://cloud.google.com/build/docs/triggers
Cloud Build uses build triggers to enable CI/CD automation. You can configure triggers to listen for incoming events, such as when a new commit is
pushed to a repository or when a pull request is initiated, and then automatically execute a build when new events come in. You can also configure
triggers to build code on any changes to your source repository or only on changes that match certain criteria.
upvoted 1 times
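As a sketch of option B, such a trigger tied to the development branch could also be created programmatically with the Cloud Build client library; the project, repo, and trigger names below are assumptions, and creating the trigger in the console works just as well.

```python
# Hedged sketch: a Cloud Build trigger that runs cloudbuild.yaml on pushes to the
# development branch of a Cloud Source Repositories repo. Names are placeholders.
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()

trigger = cloudbuild_v1.BuildTrigger(
    name="deploy-on-dev-push",
    trigger_template=cloudbuild_v1.RepoSource(
        repo_name="my-app",           # placeholder repository name
        branch_name="^development$",  # regex matching the development branch
    ),
    filename="cloudbuild.yaml",       # the existing build file
)

client.create_build_trigger(project_id="my-project", trigger=trigger)
```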
Cloud Build enables you to build the container image, store the built image in Container Registry, and then deploy the image to Cloud Run.
upvoted 1 times
https://cloud.google.com/build/docs/automating-builds/create-manage-triggers#connect_repo
Answer B
upvoted 1 times
You are a developer at a large organization. You are deploying a web application to Google Kubernetes Engine (GKE). The DevOps team has built a
CI/CD pipeline that uses Cloud Deploy to deploy the application to Dev, Test, and Prod clusters in GKE. After Cloud Deploy successfully deploys
the application to the Dev cluster, you want to automatically promote it to the Test cluster. How should you configure this process following
A. 1. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic.
2. Configure Cloud Build to include a step that promotes the application to the Test cluster.
B. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the application to the Test cluster.
2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the cloud-builds topic.
C. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the application to the Test cluster.
2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic.
2. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the cloud-builds topic.
Correct Answer: D
Selected Answer: C
I think it should be C.
upvoted 1 times
Selected Answer: A
It's either A or D. A is better since the topic is clouddeploy-operations. Once a message is published to this topic, it means the deployment to the environment has completed successfully, so the next step is to deploy the containers to the Test cluster. So I go with option A.
upvoted 1 times
https://cloud.google.com/functions/docs/calling/pubsub
https://cloud.google.com/deploy/docs/integrating#integrating_with_automated_testing
cloud deploy sends message
cloud build reads this message
upvoted 1 times
Selected Answer: A
https://cloud.google.com/build/docs/automate-builds-pubsub-events#console_2
upvoted 1 times
Selected Answer: C
https://cloud.google.com/functions/docs/calling/pubsub
upvoted 1 times
Selected Answer: C
This option (C) is recommended because it follows the best practice of using a serverless function, specifically Cloud Functions, for triggering
automated tasks in response to events. In this case, the function will be triggered by a SUCCEEDED message from the clouddeploy-operations
topic, indicating that the deployment to the Dev cluster has completed successfully. The function will then use the Google Cloud Deploy API to
promote the application to the Test cluster.
Using a Cloud Function in this way allows for a scalable, event-driven architecture and reduces the amount of infrastructure required to manage
the deployment process.
upvoted 2 times
In Option A, the Cloud Build trigger listens for SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic, and then promotes the
application to the Test cluster as part of the Cloud Build pipeline. This approach involves using a more complex and less scalable infrastructure
than using a serverless function like Cloud Functions.
On the other hand, Option C uses a Cloud Function to promote the application, which is a more streamlined, scalable, and event-driven
solution. Cloud Functions are designed specifically for triggering automated tasks in response to events, making them a better choice for this
type of use case.
upvoted 3 times
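To make option C concrete, here is a hedged Python sketch of such a Cloud Function: it reacts to a clouddeploy-operations message and creates a rollout to the Test target via the Cloud Deploy API. The attribute check and all resource names are assumptions.

```python
# Hedged sketch: a Pub/Sub-triggered Cloud Function (1st gen signature) that promotes
# a Cloud Deploy release to the Test target by creating a rollout. All resource names
# are placeholders, and the check on the message attributes is simplified.
from google.cloud import deploy_v1

def promote_to_test(event, context):
    # clouddeploy-operations messages carry attributes describing the operation;
    # this "Action" == "Succeed" check is an assumption, simplified for illustration.
    if event.get("attributes", {}).get("Action") != "Succeed":
        return
    client = deploy_v1.CloudDeployClient()
    release = ("projects/my-project/locations/us-central1/"
               "deliveryPipelines/my-pipeline/releases/my-release")  # placeholder
    client.create_rollout(
        parent=release,
        rollout_id="promote-to-test",
        rollout=deploy_v1.Rollout(target_id="test"),  # "test" target is a placeholder
    )
```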
https://cloud.google.com/deploy/docs/integrating#integrating_with_automated_testing
https://cloud.google.com/deploy/docs/integrating#before_you_begin
upvoted 2 times
Your application is running as a container in a Google Kubernetes Engine cluster. You need to add a secret to your application using a secure
A. Create a Kubernetes Secret, and pass the Secret as an environment variable to the container.
B. Enable Application-layer Secret Encryption on the cluster using a Cloud Key Management Service (KMS) key.
C. Store the credential in Cloud KMS. Create a Google service account (GSA) to read the credential from Cloud KMS. Export the GSA as a .json
file, and pass the .json file to the container as a volume which can read the credential from Cloud KMS.
D. Store the credential in Secret Manager. Create a Google service account (GSA) to read the credential from Secret Manager. Create a
Kubernetes service account (KSA) to run the container. Use Workload Identity to configure your KSA to act as a GSA.
Correct Answer: A
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: A
What I have seen and done until now is option A, so I go with A. I am not sure which of A and D is the more secure approach, so I choose A with some doubt.
upvoted 1 times
https://kubernetes.io/docs/concepts/configuration/secret/
upvoted 2 times
Selected Answer: D
A is not correct because a Kubernetes Secret only encodes the string, and anyone who can read the secret will be able to decode it.
upvoted 4 times
Selected Answer: A
Using D would also be a secure approach. Option D uses a combination of Google Secret Manager, Google Service Account, and Workload Identity
to store and retrieve secrets securely. The Workload Identity enables the Kubernetes Service Account to act as the Google Service Account, which
has the required permissions to read the secrets from Secret Manager.
Both options A and D are secure ways to store and retrieve secrets in a Kubernetes cluster, but option A is simpler and requires fewer steps. It may
be more appropriate for smaller or less complex environments, while option D provides more advanced security and management features and is
more suitable for larger and more complex environments.
upvoted 2 times
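Once Workload Identity maps the KSA to the GSA, the application code needs no key files at all; below is a minimal sketch of reading the credential, with placeholder project and secret names.

```python
# Minimal sketch: reading a secret from Secret Manager inside a Pod whose Kubernetes
# service account is mapped to a Google service account via Workload Identity.
# Credentials come from the environment; no key file is mounted.
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/db-password/versions/latest"  # placeholders
response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("utf-8")
```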
https://cloud.google.com/secret-manager/docs/overview
upvoted 1 times
Selected Answer: A
Secrets can be mounted as data volumes or exposed as environment variables to be used by a container in a Pod. Secrets can also be used by ...
https://cloud.google.com/secret-manager/docs/best-practices
https://kubernetes.io/docs/concepts/security/secrets-good-practices/
upvoted 1 times
You are a developer at a financial institution. You use Cloud Shell to interact with Google Cloud services. User data is currently stored on an
ephemeral disk; however, a recently passed regulation mandates that you can no longer store sensitive information on an ephemeral disk. You
need to implement a new storage solution for your user data. You want to minimize code changes. Where should you store your user data?
A. Store user data on a Cloud Shell home disk, and log in at least every 120 days to prevent its deletion.
Correct Answer: C
Selected Answer: B
Persistent disk is the right option to store sensitive info in this case. (Obviously, in the general sense, we should store user data in a proper data store.)
Key points:
1) You use Cloud Shell to interact with Google Cloud services
2) You want to minimize code changes
upvoted 1 times
Selected Answer: B
Store user data in a Cloud Storage bucket is a good option for storing large amounts of data, but if you need to minimize code changes, using a
persistent disk in a Compute Engine instance may be a better fit as it provides a more direct replacement for an ephemeral disk with similar access
patterns, which will likely require fewer changes to your existing code. Storing user data in a Cloud Storage bucket would likely require more
significant changes to how your application interacts with the data.
upvoted 4 times
Selected Answer: B
https://cloud.google.com/shell/docs/how-cloud-shell-works#persistent_disk_storage
How Cloud Shell works
Cloud Shell provisions a Compute Engine virtual machine running a Debian-based Linux operating system for your temporary use. This virtual
machine is owned and managed by Google Cloud, so will not appear within any of your GCP projects.
https://cloud.google.com/shell/docs/how-cloud-shell-works
upvoted 1 times
You recently developed a web application to transfer log data to a Cloud Storage bucket daily. Authenticated users will regularly review logs from
the prior two weeks for critical events. After that, logs will be reviewed once annually by an external auditor. Data must be stored for a period of no
less than 7 years. You want to propose a storage solution that meets these requirements and minimizes costs. What should you do? (Choose
two.)
A. Use the Bucket Lock feature to set the retention policy on the data.
B. Run a scheduled job to set the storage class to Coldline for objects older than 14 days.
C. Create a JSON Web Token (JWT) for users needing access to the Coldline storage buckets.
D. Create a lifecycle management policy to set the storage class to Coldline for objects older than 14 days.
E. Create a lifecycle management policy to set the storage class to Nearline for objects older than 14 days.
Correct Answer: BE
Selected Answer: AD
The requirement of storing data for a period of no less than 7 years can be met by setting the retention policy for the data in the Cloud Storage
bucket. This can be done using the Bucket Lock feature (A) or a lifecycle management policy (D), which can be set to retain the objects for the
required period of 7 years.
upvoted 3 times
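A hedged sketch combining both answers: a lifecycle rule that moves objects to Coldline after 14 days (D) and a locked 7-year retention policy (A). The bucket name is a placeholder, and locking a retention policy is irreversible.

```python
# Hedged sketch: lifecycle rule to Coldline after 14 days plus a 7-year retention
# policy secured with Bucket Lock. The bucket name is a placeholder.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("audit-logs-bucket")  # placeholder

# D: move objects older than 14 days to Coldline.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=14)

# A: retain every object for at least 7 years, then lock the policy.
bucket.retention_period = 7 * 365 * 24 * 60 * 60  # seconds
bucket.patch()
bucket.lock_retention_policy()  # irreversible once locked
```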
https://cloud.google.com/storage/docs/bucket-lock
https://cloud.google.com/storage/docs/lifecycle
upvoted 1 times
Your team is developing a Cloud Function triggered by Cloud Storage events. You want to accelerate testing and development of your Cloud
Function while following Google-recommended best practices. What should you do?
A. Create a new Cloud Function that is triggered when Cloud Audit Logs detects the cloudfunctions.functions.sourceCodeSet operation in the
original Cloud Function. Send mock requests to the new function to evaluate the functionality.
B. Make a copy of the Cloud Function, and rewrite the code to be HTTP-triggered. Edit and test the new version by triggering the HTTP
endpoint. Send mock requests to the new function to evaluate the functionality.
C. Install the Functions Frameworks library, and configure the Cloud Function on localhost. Make a copy of the function, and make edits to the
D. Make a copy of the Cloud Function in the Google Cloud console. Use the Cloud console's in-line editor to make source code changes to the
new function. Modify your web application to call the new function, and test the new version in production
Correct Answer: B
Selected Answer: C
Making a copy of the function for edits ensures that your changes do not affect the original function in production. It provides a controlled
environment for development and testing.
curl Testing:
Testing the new version using curl is a simple and effective way to send mock requests and evaluate the functionality of your Cloud Function
locally.
Using the Functions Frameworks library and local testing provides a development environment that is both efficient and aligned with Google-
recommended best practices for Cloud Functions development.
upvoted 1 times
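For illustration, here is a minimal storage-triggered function written against the Functions Framework; the function and field names are placeholders, and locally you would start it with the functions-framework CLI and send it a mock CloudEvent (for example with curl).

```python
# Hedged sketch: a CloudEvent function for Cloud Storage events, runnable locally with
#   functions-framework --target=on_object_finalized
# The bucket/name fields follow the storage event payload; names are placeholders.
import functions_framework

@functions_framework.cloud_event
def on_object_finalized(cloud_event):
    data = cloud_event.data
    print(f"Bucket: {data['bucket']}, object: {data['name']}")
```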
Selected Answer: C
Option C is well suited for testing cloud functions in the local environment.
https://cloud.google.com/functions/docs/running/function-frameworks
upvoted 1 times
"Because testing code on Cloud Functions itself involves waiting for deployed code and log entries to become available, running and testing your function on your development machine can make the testing process (and, in turn, the development process) significantly faster."
C because: https://cloud.google.com/functions/docs/running/function-frameworks
upvoted 1 times
Selected Answer: C
https://cloud.google.com/functions/docs/running/calling#cloudevent_functions
upvoted 1 times
Selected Answer: C
https://cloud.google.com/functions/docs/running/overview#choosing_an_abstraction_layer
https://cloud.google.com/functions/docs/running/function-frameworks
https://cloud.google.com/functions/docs/running/calling#cloudevent-function-curl-tabs-storage
upvoted 1 times
Selected Answer: B
https://cloud.google.com/functions/docs/writing/write-event-driven-functions
https://cloud.google.com/functions/docs/calling/storage
upvoted 2 times
https://firebase.google.com/docs/functions/gcp-storage-events
upvoted 1 times
Your team is setting up a build pipeline for an application that will run in Google Kubernetes Engine (GKE). For security reasons, you only want
images produced by the pipeline to be deployed to your GKE cluster. Which combination of Google Cloud services should you use?
Correct Answer: C
D is correct.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/binary-authorization/docs/cloud-build
upvoted 2 times
Selected Answer: D
I choose D.
upvoted 1 times
Selected Answer: D
I'd go with D
https://cloud.google.com/architecture/app-development-and-delivery-with-cloud-code-gcb-cd-and-gke#objectives
upvoted 1 times
You are supporting a business-critical application in production deployed on Cloud Run. The application is reporting HTTP 500 errors that are
affecting the usability of the application. You want to be alerted when the number of errors exceeds 15% of the requests within a specific time
A. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Scheduler to trigger the Cloud Function daily and alert you if
B. Navigate to the Cloud Run page in the Google Cloud console, and select the service from the services list. Use the Metrics tab to visualize
the number of errors for that revision, and refresh the page daily.
C. Create an alerting policy in Cloud Monitoring that alerts you if the number of errors is above the defined threshold.
D. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Composer to trigger the Cloud Function daily and alert you if
Correct Answer: A
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
C is the right answer.
Create an alert policy that alerts you if the number of errors exceeds 15% of the requests within a specific time window. Simple and straightforward.
upvoted 2 times
Selected Answer: C
https://cloud.google.com/run/docs/monitoring#custom-metrics
https://cloud.google.com/run/docs/monitoring#add_alerts
https://cloud.google.com/monitoring/alerts
upvoted 1 times
Selected Answer: C
Option A involves creating a Cloud Function that is triggered by Cloud Scheduler, but this option does not fully address the requirement of being
alerted if the number of errors exceeds a specific threshold. Option A requires manual checking of the error count, whereas option C provides a
more automated solution by setting up an alerting policy in Cloud Monitoring that sends an alert if the number of errors exceeds the defined
threshold.
upvoted 3 times
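As a hedged sketch of option C, an alerting policy can be created with the Cloud Monitoring API. Note that a true "15% of requests" rule needs a ratio of 5xx to total requests; the plain count threshold below is illustrative only, and the project name and values are placeholders.

```python
# Hedged sketch: an alerting policy on Cloud Run 5xx responses via the Cloud
# Monitoring API. A real 15% rule would compare 5xx to total request count;
# this simple threshold is for illustration, with placeholder names/values.
from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()

condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Cloud Run 5xx responses",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'resource.type = "cloud_run_revision" AND '
            'metric.type = "run.googleapis.com/request_count" AND '
            'metric.labels.response_code_class = "5xx"'
        ),
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=100,        # placeholder; use a ratio for a real 15% rule
        duration={"seconds": 300},  # condition must hold for 5 minutes
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="Cloud Run error-rate alert",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
)

client.create_alert_policy(name="projects/my-project", alert_policy=policy)
```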
Selected Answer: A
https://cloud.google.com/monitoring/alerts/policies-in-api#metric-polices
A has all 3 requirements; C is also good, but I will go with A.
upvoted 2 times
You need to build a public API that authenticates, enforces quotas, and reports metrics for API callers. Which tool should you use to complete this
architecture?
A. App Engine
B. Cloud Endpoints
C. Identity-Aware Proxy
Correct Answer: D
Selected Answer: B
B is correct
upvoted 1 times
Selected Answer: B
https://cloud.google.com/endpoints
upvoted 2 times
Selected Answer: B
https://cloud.google.com/endpoints/docs/openapi/quotas-overview
upvoted 1 times
You noticed that your application was forcefully shut down during a Deployment update in Google Kubernetes Engine. Your application didn’t close
the database connection before it was terminated. You want to update your application to make sure that it completes a graceful shutdown. What should you do?
A. Update your code to process a received SIGTERM signal to gracefully disconnect from the database.
B. Configure a PodDisruptionBudget to prevent the Pod from being forcefully shut down.
Correct Answer: B
Selected Answer: A
This is the most direct and effective way to ensure that your application completes a graceful shutdown. When your application receives a SIGTERM
signal, it should use this signal as a trigger to disconnect from the database and complete any other necessary tasks before terminating.
upvoted 1 times
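A minimal sketch of option A in Python; sqlite3 stands in for whatever database client the application actually uses.

```python
# Minimal sketch of option A: catch SIGTERM and close the database connection
# before the container exits. sqlite3 is a stand-in for the real database handle.
import signal
import sqlite3
import sys

db_connection = sqlite3.connect("app.db")  # placeholder for your real connection

def graceful_shutdown(signum, frame):
    print("SIGTERM received, closing database connection")
    db_connection.close()
    sys.exit(0)

signal.signal(signal.SIGTERM, graceful_shutdown)
```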
A is right. After catching the SIGTERM signal raised by the Pod shutdown, we need to release the DB connection.
upvoted 1 times
A is a best practice
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
upvoted 2 times
Selected Answer: A
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
upvoted 1 times
I would choose A.
upvoted 1 times
While a PodDisruptionBudget can help protect a Pod from being forcibly terminated during a deployment update, it does not ensure a graceful
shutdown of the application. Option A, updating the code to handle SIGTERM signals, is the recommended way to ensure a graceful shutdown in
the event of a termination.
upvoted 2 times
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks
And here's a link to the official Kubernetes documentation on how to handle the SIGTERM signal in your application:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-graceful-shutdown-for-your-application
upvoted 1 times
https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
upvoted 1 times
You are a lead developer working on a new retail system that runs on Cloud Run and Firestore in Datastore mode. A web UI requirement is for the
system to display a list of available products when users access the system and for the user to be able to browse through all products. You have
implemented this requirement in the minimum viable product (MVP) phase by returning a list of all available products stored in Firestore.
A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances are exceeding memory limits
errors during busy times. This error coincides with spikes in the number of Datastore entity reads. You need to prevent Cloud Run from crashing
and decrease the number of Datastore entity reads. You want to use a solution that optimizes system performance. What should you do?
A. Modify the query that returns the product list using integer offsets.
B. Modify the query that returns the product list using limits.
D. Modify the query that returns the product list using cursors.
Correct Answer: C
Selected Answer: D
Cursors allow you to paginate through the results of a Firestore query. This can be useful for queries that return a large number of results, such as
the query that returns the list of all available products.
upvoted 1 times
D is correct.
Use pagination and return results only in batches when querying for the list of products. This is called lazy loading.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/datastore/docs/best-practices#queries
upvoted 1 times
Selected Answer: A
Note: To conserve memory and improve performance, a query should, whenever possible, specify a limit on the number of results returned.
https://cloud.google.com/datastore/docs/concepts/queries#cursors_limits_and_offsets
upvoted 1 times
While increasing the memory limits of Cloud Run instances could help alleviate the issue temporarily, it would not address the root cause of the
problem, which is the high number of Datastore entity reads during busy times. Over time, as more products are added to the system, this problem
would only become more severe, and you would have to continually increase the memory limits to prevent Cloud Run from crashing.
Using cursors to paginate the results and retrieve a limited number of products at a time is a more sustainable solution as it reduces the amount of
data that needs to be read from Datastore and decreases the memory usage of your Cloud Run instances. This way, you can maintain the
performance of the system and prevent it from crashing, even as more products are added over time.
upvoted 3 times
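A hedged sketch of cursor-based pagination in Datastore mode follows; the kind and page size are placeholders. Each request reads at most one page of entities, which caps both entity reads and memory use.

```python
# Hedged sketch of option D: paging through products with Datastore cursors so each
# request reads a bounded number of entities. Kind and page size are placeholders.
from google.cloud import datastore

client = datastore.Client()

def list_products(cursor=None, page_size=20):
    query = client.query(kind="Product")  # placeholder kind
    query_iter = query.fetch(start_cursor=cursor, limit=page_size)
    page = next(query_iter.pages)             # reads at most page_size entities
    products = list(page)
    next_cursor = query_iter.next_page_token  # pass back to fetch the next page
    return products, next_cursor
```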
Selected Answer: C
Cursors allow you to paginate through query results efficiently, which can help reduce the number of Datastore entity reads and prevent Cloud
Run instances from crashing due to exceeding memory limits. By using cursors, you can retrieve only a portion of the query results at a time,
instead of retrieving all results in one go, which can help optimize system performance.
upvoted 3 times
Using cursors to paginate through query results, as in option D, is a better solution because it allows you to retrieve only the necessary data
at a time, which can help reduce the number of Datastore entity reads and prevent Cloud Run instances from crashing.
upvoted 2 times
You need to deploy an internet-facing microservices application to Google Kubernetes Engine (GKE). You want to validate new features using the
A/B testing method. You have the following requirements for deploying new container image releases:
• New production releases are tested and verified using a subset of production users.
A. 1. Configure your CI/CD pipeline to update the Deployment manifest file by replacing the container version with the latest version.
2. Recreate the Pods in your cluster by applying the Deployment manifest file.
3. Validate the application's performance by comparing its functionality with the previous release version, and roll back if an issue arises.
2. Create a Deployment configuration for the second namespace with the desired number of Pods.
4. Update the Ingress configuration to route traffic to the namespace with the new container versions.
2. Create two Deployments on the GKE cluster, and label them with different version names.
3. Implement an Istio routing rule to send a small percentage of traffic to the Deployment that references the new version of the application.
D. 1. Implement a rolling update pattern by replacing the Pods gradually with the new release version.
2. Validate the application's performance for the new subset of users during the rollout, and roll back if an issue arises.
Correct Answer: D
Selected Answer: C
https://cloud.google.com/architecture/implementing-deployment-and-testing-strategies-on-gke#perform_an_ab_test
I would say C:
To try this pattern, you perform the following steps:
Deploy the current version of the application (app:current) on the GKE cluster.
Deploy a new version of the application (app:new) alongside the current version.
Use Istio to route incoming requests that have the username test in the request's cookie to app:new. All other requests are routed to app:current.
upvoted 6 times
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
C looks good, since it sends "a small percentage of traffic to the Deployment that references the new version of the application" for A/B testing.
D is close, but not a perfect fit for the stated requirements.
upvoted 1 times
C. The keywords, "A/B testing", "verified using a subset of production users", mean we need canary deployment.
A: No. In-place deployment.
B: No. This is Blue/Green deployment, but the Ingress config (=manifest) does not have a way to route a subset of traffic to a different namespace.
C: Yes.
D: No, there's no mechanism on Ingress / Services manifests that can specify a subset of users, plus this is rolling update (=in-place deployment)
upvoted 1 times
Selected Answer: C
upvoted 1 times
Selected Answer: B
This approach allows you to deploy new container images without downtime, as the traffic is only being redirected to the new namespace once the
Deployment is ready. This also allows you to test and verify the new production release using a subset of production users by routing only a
portion of the traffic to the new namespace.
upvoted 4 times
Selected Answer: D
https://auth0.com/blog/deployment-strategies-in-kubernetes/
Rolling updates are ideal because they allow you to deploy an application slowly with minimal overhead, minimal performance impact, and minimal
upvoted 1 times
Your team manages a large Google Kubernetes Engine (GKE) cluster. Several application teams currently use the same namespace to develop
microservices for the cluster. Your organization plans to onboard additional teams to create microservices. You need to configure multiple
environments while ensuring the security and optimal performance of each team’s work. You want to minimize cost and follow Google-
A. Create new role-based access controls (RBAC) for each team in the existing cluster, and define resource quotas.
B. Create a new namespace for each environment in the existing cluster, and define resource quotas.
D. Create a new namespace for each team in the existing cluster, and define resource quotas.
Correct Answer: A
Selected Answer: D
I will go with D.
upvoted 1 times
Selected Answer: B
B is correct
upvoted 1 times
Selected Answer: A
I go with A because of security, low cost, and Google-recommended best practices. There should be no need to create additional namespaces, since several application teams already use the same namespace to develop microservices for the cluster.
upvoted 1 times
Selected Answer: B
Option B is the only one which addresses the part of the question that says 'You need to configure multiple environments'
upvoted 3 times
Selected Answer: A
I was torn between A and D.
From the phrase "to develop microservices for the cluster", I judged that these teams are each creating microservices for functions of the same large application.
If that's true, there is no need to separate them by namespace.
I think what you should protect is resources, for example the Spanner instance for the development environment and the one for the release environment, with other teams' access forbidden.
On that reading, I think this question's answer is A.
upvoted 1 times
security
upvoted 1 times
Selected Answer: D
upvoted 1 times
Selected Answer: A
To configure more granular access to Kubernetes resources at the cluster level or within Kubernetes namespaces, you use Role-Based Access Control (RBAC). RBAC allows you to create detailed policies that define which operations and resources you allow users and service accounts to access. With RBAC, you can control access for Google Accounts, Google Cloud service accounts, and Kubernetes service accounts.
upvoted 2 times
Selected Answer: A
https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac
upvoted 1 times
You have deployed a Java application to Cloud Run. Your application requires access to a database hosted on Cloud SQL. Due to regulatory
requirements, your connection to the Cloud SQL instance must use its internal IP address. How should you configure the connectivity while
B. Configure your Cloud Run service to use a Serverless VPC Access connector.
D. Configure your application to connect to an instance of the Cloud SQL Auth proxy.
Correct Answer: C
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
Option B, using a Serverless VPC Access connector, is the recommended best practice for accessing a Cloud SQL instance from Cloud Run because
it provides a secure and scalable way to connect to your internal resources.
With this option, you can connect your Cloud Run service to your internal VPC network, allowing it to access resources such as Cloud SQL instances
that have internal IP addresses. This eliminates the need for a public IP address or a public network connection to your database, which can
increase security and regulatory compliance.
upvoted 1 times
Option C, using the Cloud SQL Java connector, is a valid way to connect to a Cloud SQL instance but does not provide the secure and scalable
VPC connectivity that is recommended by Google.
Option D, connecting to an instance of the Cloud SQL Auth proxy, is a valid way to connect to a Cloud SQL instance, but it requires additional
setup and maintenance, and may not be the most secure or scalable option, especially for large-scale deployments.
upvoted 1 times
Selected Answer: C
https://cloud.google.com/sql/docs/mysql/connect-connectors#setup-and-usage
If your application is written in Java you can skip this step, since you do this in the Java Cloud SQL Connector
upvoted 1 times
In this documentation, Google recommends using a Serverless VPC Access connector to connect to the internal IP address of a Cloud SQL
instance, which is a secure and scalable way to access resources in a VPC network.
upvoted 2 times
Using a Serverless VPC Access connector to connect to the internal IP address, as suggested by option B, provides a more secure and
performant solution. This method allows you to access the internal IP address of your Cloud SQL instance from a private network, bypassing
the public internet, and avoiding exposure to security threats.
upvoted 3 times
Your application stores customers’ content in a Cloud Storage bucket, with each object being encrypted with the customer's encryption key. The
key for each object in Cloud Storage is entered into your application by the customer. You discover that your application is receiving an HTTP 4xx
error when reading the object from Cloud Storage. What is a possible cause of this error?
A. You attempted the read operation on the object with the customer's base64-encoded key.
B. You attempted the read operation without the base64-encoded SHA256 hash of the encryption key.
C. You entered the same encryption algorithm specified by the customer when attempting the read operation.
D. You attempted the read operation on the object with the base64-encoded SHA256 hash of the customer's key.
Correct Answer: D
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
4xx covers bad request, forbidden, not found, and more.
If we want to read an object from the Cloud Storage bucket programmatically, we need to pass the same customer key that was used to encrypt the object.
Selected Answer: B
According to the documentation the SHA256 is needed in the REST API -> B
https://cloud.google.com/storage/docs/encryption/using-customer-supplied-keys#rest-csek-download-object
upvoted 4 times
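For illustration: with the Python client library you pass the raw key bytes and the library derives the SHA256 hash header for you, whereas the raw REST API requires both the base64-encoded key and its base64-encoded SHA256 hash. Bucket and object names below are placeholders.

```python
# Hedged sketch: reading a CSEK-encrypted object. The client library takes the raw
# AES-256 key bytes and computes the SHA256 hash header itself; with the raw REST
# API you must send both the base64 key and its base64 SHA256 hash.
import base64

from google.cloud import storage

customer_key_b64 = "..."  # the base64-encoded key the customer entered (placeholder)
key_bytes = base64.b64decode(customer_key_b64)

client = storage.Client()
bucket = client.bucket("customer-content")  # placeholder
blob = bucket.blob("object-name", encryption_key=key_bytes)
data = blob.download_as_bytes()  # fails with a 4xx if the key doesn't match
```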
Selected Answer: D
Option D is a possible cause of an HTTP 4xx error when reading an object from Cloud Storage because it is incorrect to use the base64-encoded
SHA256 hash of the customer's encryption key to read an encrypted object. To read an encrypted object, you need to use the original encryption
key, not its hash. The HTTP 4xx error could be a result of an incorrect or unsupported key format, or a key mismatch. On the other hand, using the
base64-encoded key (Option A), the encryption algorithm (Option C), or the base64-encoded SHA256 hash of the encryption key (Option B)
without the original encryption key would not allow the object to be decrypted and read.
upvoted 2 times
1.You upload an object using a customer-supplied encryption key, and you attempt to perform another operation on the object (other than
requesting or updating most metadata or deleting the object) without providing the key.
2.You upload an object using a customer-supplied encryption key, and you attempt to perform another operation on the object with an incorrect
key.
3.You upload an object without providing a customer-supplied encryption key, and you attempt to perform another operation on the object with a
customer-supplied encryption key.
4.You specify an encryption algorithm, key, or SHA256 hash that is not valid.
Point number 2 has the answer
https://cloud.google.com/storage/docs/encryption/customer-supplied-keys#response
upvoted 3 times
You have two Google Cloud projects, named Project A and Project B. You need to create a Cloud Function in Project A that saves the output in a
Cloud Storage bucket in Project B. You want to follow the principle of least privilege. What should you do?
3. Assign this service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
3. Assign this service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
2. Deploy the Cloud Function with the default App Engine service account in Project A.
3. Assign the default App Engine service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
2. Deploy the Cloud Function with the default App Engine service account in Project A.
3. Assign the default App Engine service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
Correct Answer: C
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
it's B.
https://articles.wesionary.team/multi-project-account-service-account-in-gcp-ba8f8821347e
upvoted 2 times
Selected Answer: B
A is not correct because you cannot run a Cloud Function with a service account that is not in the same Google Cloud project.
B is correct because it follows the least privilege principle and for a Cloud Function, the service account must be created in the same project where
the function is getting executed.
upvoted 3 times
In option B, a service account is created in Project A, but this service account would have access to all the resources within Project A, which is more
than is necessary for the task of saving output to a storage bucket in Project B.
Options C and D use the default App Engine service account, which would have more permissions than necessary, as it would have access to all
App Engine resources within Project A or B, rather than just the permissions needed for the task of saving output to a storage bucket in Project B.
upvoted 2 times
In this guide, it explains the best practice for providing authentication credentials to your application. By creating a separate Google service
account in the project that owns the resource you want to access (in this case, Project B), and then using that service account to perform
actions on the resource (writing to the Cloud Storage bucket in Project B), you are following the principle of least privilege. This means that
you are granting the minimum permissions necessary to perform the desired action.
upvoted 1 times
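Step 3 of option B can also be scripted; here is a hedged sketch granting the Project A service account roles/storage.objectCreator on the Project B bucket (all names are placeholders).

```python
# Hedged sketch of step 3: granting Project A's function service account
# roles/storage.objectCreator on the bucket in Project B. Names are placeholders.
from google.cloud import storage

client = storage.Client(project="project-b")    # placeholder project
bucket = client.bucket("shared-output-bucket")  # placeholder bucket

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectCreator",
    "members": {"serviceAccount:fn-writer@project-a.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)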
https://cloud.google.com/functions/docs/concepts/iam#runtime_service_accounts
upvoted 1 times
A governmental regulation was recently passed that affects your application. For compliance purposes, you are now required to send a duplicate
of specific application logs from your application’s project to a project that is restricted to the security team. What should you do?
A. Create user-defined log buckets in the security team’s project. Configure a Cloud Logging sink to route your application’s logs to log buckets
B. Create a job that copies the logs from the _Required log bucket into the security team’s log bucket in their project.
C. Modify the _Default log bucket sink rules to reroute the logs into the security team’s log bucket.
D. Create a job that copies the System Event logs from the _Required log bucket into the security team’s log bucket in their project.
Correct Answer: B
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
I go with A.
This question tests the Cloud Logging sink feature.
upvoted 1 times
Selected Answer: A
I choose option A because it provides a direct and automated solution for duplicating the specific application logs and sending them to the
security team's project. This method uses Cloud Logging's sink feature, which is a powerful tool for routing logs to other destinations, such as log
buckets or Pub/Sub topics. By using a sink, you can ensure that the duplication of logs is performed in real-time and automatically, which would
minimize manual intervention and minimize the risk of errors.
upvoted 3 times
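A hedged sketch of option A with the Cloud Logging client library follows; the filter, sink name, and destination log bucket are placeholders, and the sink's writer identity must separately be granted access to the destination.

```python
# Hedged sketch of option A: a Cloud Logging sink in the application project that
# routes matching logs to a log bucket in the security team's project. All names
# and the filter are placeholders.
import google.cloud.logging

client = google.cloud.logging.Client(project="app-project")  # placeholder

destination = ("logging.googleapis.com/projects/security-project/"
               "locations/global/buckets/security-logs")  # placeholder log bucket
sink = client.sink(
    "security-duplicate-sink",
    filter_='resource.type="cloud_run_revision" AND severity>=WARNING',  # placeholder
    destination=destination,
)
sink.create()
```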
You plan to deploy a new Go application to Cloud Run. The source code is stored in Cloud Source Repositories. You need to configure a fully
managed, automated, continuous deployment pipeline that runs when a source code commit is made. You want to use the simplest deployment
A. Configure a cron job on your workstations to periodically run gcloud run deploy --source in the working directory.
B. Configure a Jenkins trigger to run the container build and deploy process for each source code commit to Cloud Source Repositories.
C. Configure continuous deployment of new revisions from a source repository for Cloud Run using buildpacks.
D. Use Cloud Build with a trigger configured to run the container build and deploy process for each source code commit to Cloud Source
Repositories.
Correct Answer: D
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Selected Answer: C
This is because Google Cloud Run offers the ability to automate the deployment of new revisions directly from a source repository using
buildpacks. This is an extremely simple and managed way to set up a continuous deployment pipeline.
Option D, while a valid method for automating deployments, is not as simple as using Cloud Run's integrated deployment feature, as it involves the
additional service of Cloud Build.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/run/docs/continuous-deployment-with-cloud-build
Cloud Build is a fully managed, scalable, and efficient service provided by Google Cloud that allows you to automate your software delivery
pipeline, including building, testing, and deploying applications. By using a trigger with Cloud Build, you can automatically build and deploy your
Go application to Cloud Run whenever a source code commit is made in Cloud Source Repositories. This provides a simple, fully managed solution
for continuous deployment, and eliminates the need for manual processes or external tools like Jenkins.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/run/docs/continuous-deployment-with-cloud-build
upvoted 1 times
Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy
REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the target service in a way that
is resilient. You also want to be able to run health checks on the legacy service on a separate port. How should you set up the connection?
(Choose two.)
A. Use Traffic Director with a sidecar proxy to connect the application to the service.
B. Use a proxyless Traffic Director configuration to connect the application to the service.
C. Configure the legacy service's firewall to allow health checks originating from the proxy.
D. Configure the legacy service's firewall to allow health checks originating from the application.
E. Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.
Correct Answer: AC
AC are correct.
upvoted 1 times
I agree, AC.
upvoted 1 times
Selected Answer: AC
A. Using Traffic Director with a sidecar proxy can provide resilience for your application by allowing for failover to the secondary region in the event
of an outage. The sidecar proxy can route traffic to the legacy service in either of the two GKE clusters, ensuring high availability.
C. Configuring the legacy service's firewall to allow health checks originating from the proxy allows the proxy to periodically check the health of the
legacy service and ensure that it is functioning properly. This helps to ensure that traffic is only routed to healthy instances of the legacy service,
further improving the resilience of the setup.
upvoted 2 times
Selected Answer: AC
https://cloud.google.com/load-balancing/docs/health-checks#health_check_categories_protocols_and_ports
upvoted 1 times
Selected Answer: AC
https://cloud.google.com/traffic-director/docs/advanced-setup#routing-rule-maps
https://cloud.google.com/traffic-director/docs/advanced-setup
upvoted 1 times
You have an application running in a production Google Kubernetes Engine (GKE) cluster. You use Cloud Deploy to automatically deploy your
application to your production GKE cluster. As part of your development process, you are planning to make frequent changes to the application’s
source code and need to select the tools to test the changes before pushing them to your remote source code repository. Your toolset must meet
Which tools should you use to test building and running a container on your laptop using minimal resources?
Correct Answer: C
Selected Answer: C
Minikube is a tool for running Kubernetes locally on your laptop. Skaffold is a tool for scaffolding, building, and deploying Kubernetes applications.
upvoted 1 times
Selected Answer: C
C is the correct choice. Since a local Kubernetes environment is required, Minikube and Skaffold are the right choices.
upvoted 1 times
Selected Answer: C
Minikube is a tool that runs a single-node Kubernetes cluster locally on your laptop, allowing you to test and run your application on a simulated
production environment. Skaffold is a command line tool that automates the process of building and deploying your application to a local or
remote Kubernetes cluster.
Together, Minikube and Skaffold allow you to test your frequent changes locally, with a deployment that emulates a production environment, using
minimal resources. Minikube provides the simulated production environment, while Skaffold takes care of building and deploying your application,
making the development process smoother and more efficient.
upvoted 3 times
Selected Answer: C
Answer C
Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one
node. Minikube is available for Linux, macOS, and Windows systems.
Skaffold is a tool that handles the workflow for building, pushing and deploying your application. You can use Skaffold to easily configure a local
development workspace, streamline your inner development loop, and integrate with other tools such as Kustomize and Helm to help manage
your Kubernetes manifests
upvoted 1 times
You are deploying a Python application to Cloud Run using Cloud Source Repositories and Cloud Build. The Cloud Build pipeline is shown below:
You want to optimize deployment times and avoid unnecessary steps. What should you do?
B. Deploy a new Docker registry in a VPC, and use Cloud Build worker pools inside the VPC to run the build pipeline.
C. Store image artifacts in a Cloud Storage bucket in the same region as the Cloud Run instance.
D. Add the --cache-from argument to the Docker build step in your build config file.
Correct Answer: D
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
Selected Answer: D
https://cloud.google.com/build/docs/optimize-builds/speeding-up-builds#using_a_cached_docker_image
upvoted 1 times
Option D, adding the --cache-from argument to the Docker build step in the build config file, would be the best option to optimize deployment
times.
The --cache-from argument allows you to specify a list of images that Docker should use as a cache source when building the image. If a layer in
the current build matches a layer in one of the cache source images, Docker uses the cached layer instead of building it again, reducing the build
time.
Options A and C may not have a significant impact on deployment times, and option B would likely add complexity and increase deployment
times, as it would require deploying and managing a new Docker registry and using a VPC-based Cloud Build worker pool.
upvoted 1 times
Selected Answer: D
https://cloud.google.com/build/docs/optimize-builds/speeding-up-builds#using_a_cached_docker_image
upvoted 1 times
You are developing an event-driven application. You have created a topic to receive messages sent to Pub/Sub. You want those messages to be
processed in real time. You need the application to be independent from any other system and only incur costs when new messages arrive. How
A. Deploy the application on Compute Engine. Use a Pub/Sub push subscription to process new messages in the topic.
B. Deploy your code on Cloud Functions. Use a Pub/Sub trigger to invoke the Cloud Function. Use the Pub/Sub API to create a pull
C. Deploy the application on Google Kubernetes Engine. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read
D. Deploy your code on Cloud Functions. Use a Pub/Sub trigger to handle new messages in the topic.
Correct Answer: B
Selected Answer: D
Selected Answer: D
I would go with D.
upvoted 1 times
Selected Answer: B
Selected Answer: D
Selected Answer: D
https://cloud.google.com/functions/docs/calling/pubsub
We selected D based on our experience with Cloud Functions and the material at the URL above.
Since the message payload is delivered directly in the Cloud Function's arguments, there is no need to manage a subscription explicitly.
"Only incur costs when new messages arrive" means it's fine to process on the trigger.
I don't think "real time" is meant that strictly here.
For the life of me, I can't find any reason why D is wrong, and it seems to me that B is wrong because of the extra processing it adds.
upvoted 4 times
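For reference, deploying a Pub/Sub-triggered function as in option D is a single command (the topic, function, and entry-point names below are hypothetical):
gcloud pubsub topics create incoming-events
gcloud functions deploy process-event \
  --runtime=python311 \
  --trigger-topic=incoming-events \
  --entry-point=handle_message \
  --source=.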
Selected Answer: B
Option D is not ideal because using a Pub/Sub trigger to handle new messages in a topic is not the most efficient way to process messages in real
time. In a trigger-based architecture, Cloud Functions are invoked only when new messages are available, so there is a possibility of delays in
processing.
On the other hand, Option B provides a more efficient architecture for real-time processing. A Cloud Function is invoked for each message received
in the Pub/Sub topic, providing immediate processing as messages arrive. This way, the application is independent from any other system and
incurs costs only when new messages arrive, fulfilling the requirements stated in the question.
upvoted 4 times
Selected Answer: B
https://cloud.google.com/solutions/event-driven-architecture-pubsub
upvoted 2 times
You have an application running on Google Kubernetes Engine (GKE). The application is currently using a logging library and is outputting to
standard output. You need to export the logs to Cloud Logging, and you need the logs to include metadata about each request. You want to use
A. Change your application’s logging library to the Cloud Logging library, and configure your application to export logs to Cloud Logging.
B. Update your application to output logs in JSON format, and add the necessary metadata to the JSON.
C. Update your application to output logs in CSV format, and add the necessary metadata to the CSV.
D. Install the Fluent Bit agent on each of your GKE nodes, and have the agent export all logs from /var/log.
Correct Answer: C
"By default, GKE clusters are natively integrated with Cloud Logging (and Monitoring). When you create a GKE cluster, both Monitoring and Cloud
Logging are enabled by default."
"GKE deploys a per-node logging agent that reads container logs, adds helpful metadata, and then sends the logs to the logs router, which sends
the logs to Cloud Logging and any of the Logging sink destinations that you have configured. Cloud Logging stores logs for the duration that you
specify or 30 days by default. Because Cloud Logging automatically collects standard output and error logs for containerized processes, you can
start viewing your logs as soon as your application is deployed."
Source: https://cloud.google.com/blog/products/management-tools/using-logging-your-apps-running-kubernetes-engine
upvoted 9 times
Selected Answer: B
B for me: we're looking for the simplest method and I feel it's easier to configure the existing library to output JSON and include some context
metadata rather than changing every log statement to use Cloud Logging library.
upvoted 1 times
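To illustrate option B: on GKE, a JSON object written to stdout is parsed by the per-node logging agent, with fields such as severity and message mapped onto the LogEntry and the remaining keys landing in jsonPayload. A sketch of such a log line (the field names beyond severity and message are made up):
echo '{"severity": "INFO", "message": "order created", "orderId": "12345", "httpMethod": "POST"}'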
Selected Answer: A
The key here is "use the simplest method to accomplish this...": using the Cloud Logging library is a very simple and straightforward solution. If the app
were running on one or more VMs in an instance group, then installing the Cloud Logging agent would be the correct answer. This
environment is a GKE cluster, however, which is separate from the normal VM workflow.
upvoted 1 times
When you write logs from your service or job, they will be picked up automatically by Cloud Logging so long as the logs are written to any of these
locations:
https://cloud.google.com/run/docs/logging#container-logs
upvoted 1 times
Selected Answer: A
Option D, installing the Fluent Bit agent on each of your GKE nodes, is not the most straightforward method for exporting logs to Cloud Logging,
as it requires manual configuration and management of the Fluent Bit agent. While Fluent Bit can be used to collect and forward logs to Cloud
Logging, it is typically used for more complex logging scenarios where custom log processing is required.
Using the Cloud Logging library, as described in Option A, is a simpler and more direct method for exporting logs to Cloud Logging, as it
eliminates the need to manage an additional log agent and provides a more integrated solution for logging in a GKE environment.
upvoted 4 times
https://cloud.google.com/run/docs/logging#container-logs
upvoted 1 times
You are working on a new application that is deployed on Cloud Run and uses Cloud Functions. Each time new features are added, new Cloud
Functions and Cloud Run services are deployed. You use ENV variables to keep track of the services and enable interservice communication, but
the maintenance of the ENV variables has become difficult. You want to implement dynamic discovery in a scalable way. What should you do?
A. Configure your microservices to use the Cloud Run Admin and Cloud Functions APIs to query for deployed Cloud Run services and Cloud
B. Create a Service Directory namespace. Use API calls to register the services during deployment, and query during runtime.
C. Rename the Cloud Functions and Cloud Run service endpoints using a well-documented naming convention.
D. Deploy Hashicorp Consul on a single Compute Engine instance. Register the services with Consul during deployment, and query during
runtime.
Correct Answer: C
Selected Answer: B
B is correct.
upvoted 1 times
Selected Answer: B
Service Directory provides a scalable way to manage the registration and discovery of services. By creating a namespace, you can use API calls to
register your Cloud Run and Cloud Functions services, and query them during runtime. This allows for dynamic discovery and eliminates the need
for manually updating environment variables. Service Directory also provides features such as service health checks and metadata, which can be
used to further improve the reliability and scalability of your application.
upvoted 2 times
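A minimal sketch of option B's registration step (the namespace, service, region, and address values are placeholders):
gcloud service-directory namespaces create my-services --location=us-central1
gcloud service-directory services create checkout --namespace=my-services --location=us-central1
gcloud service-directory endpoints create checkout-v1 \
  --service=checkout --namespace=my-services --location=us-central1 \
  --address=10.0.0.15 --port=443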
Selected Answer: B
https://medium.com/google-cloud/fine-grained-cloud-dns-iam-via-service-directory-446058b4362e
https://cloud.google.com/service-directory/docs/overview
upvoted 2 times
You work for a financial services company that has a container-first approach. Your team develops microservices applications. A Cloud Build
pipeline creates the container image, runs regression tests, and publishes the image to Artifact Registry. You need to ensure that only containers
that have passed the regression tests are deployed to Google Kubernetes Engine (GKE) clusters. You have already enabled Binary Authorization on
A. Create an attestor and a policy. After a container image has successfully passed the regression tests, use Cloud Build to run Kritis Signer to
B. Deploy Voucher Server and Voucher Client components. After a container image has successfully passed the regression tests, run Voucher
C. Set the Pod Security Standard level to Restricted for the relevant namespaces. Use Cloud Build to digitally sign the container images that
D. Create an attestor and a policy. Create an attestation for the container images that have passed the regression tests as a step in the Cloud
Build pipeline.
Correct Answer: A
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
I go with A since it is detailed and more specific about Kritis digital signature.
upvoted 1 times
Selected Answer: A
A. For folks wondering about the differences between Kritis Signer and Voucher Server/Voucher Client, I asked Google Bard about it. Bard stated Kritis
Signer is a command-line tool, whereas Voucher Server/Voucher Client is a web-based tool. I then tried to verify that with Google search and
Google image search (search "voucher server voucher client", then click Images). It seems Bard reported correctly. Someone even wrote a Kritis Signer
integrated pipeline with Terraform (https://xebia.com/blog/how-to-automate-the-kritis-signer-on-google-cloud-platform/).
Also, yes, both Kritis Signer and Voucher Server/Voucher Client have official Google documentation. However, if you look carefully at the Voucher
Server/Voucher Client doc, they use curl against the Voucher Server address, which indirectly proves that Voucher Server/Voucher Client is a web-
based tool.
upvoted 1 times
Selected Answer: C
Kritis Signer is an open source command-line tool that can create Binary Authorization attestations based on a policy that you configure. You can
also use Kritis Signer to create attestations after checking an image for vulnerabilities identified by Container Analysis.
https://cloud.google.com/binary-authorization/docs/creating-attestations-kritis
upvoted 1 times
Binary Authorization in GKE provides a way to enforce that only verified container images are deployed in a cluster. In this scenario, to ensure that
only containers that have passed the regression tests are deployed, you would create an attestor and a policy in Binary Authorization, and use Kritis
Signer to create an attestation for the container image after it has passed the tests. The attestation verifies that the image meets the policy's criteria
and is authorized to be deployed. This provides a secure and automated way to enforce that only containers that have passed the required tests
are deployed in the cluster.
upvoted 1 times
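A sketch of the attestation step run from the Cloud Build pipeline (the attestor, KMS key version, and image digest are placeholders):
gcloud container binauthz attestations sign-and-create \
  --artifact-url="us-docker.pkg.dev/my-project/my-repo/my-app@sha256:<digest>" \
  --attestor="projects/my-project/attestors/regression-passed" \
  --keyversion="projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key/cryptoKeyVersions/1"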
https://cloud.google.com/binary-authorization/docs/creating-attestations-kritis
upvoted 2 times
You are reviewing and updating your Cloud Build steps to adhere to best practices. Currently, your build steps include:
You need to add a step to perform a vulnerability scan of the built container image, and you want the results of the scan to be available to your
deployment pipeline running in Google Cloud. You want to minimize changes that could disrupt other teams’ processes. What should you do?
A. Enable Binary Authorization, and configure it to attest that no vulnerabilities exist in a container image.
B. Upload the built container images to your Docker Hub instance, and scan them for vulnerabilities.
C. Enable the Container Scanning API in Artifact Registry, and scan the built container images for vulnerabilities.
D. Add Artifact Registry to your Aqua Security instance, and scan the built container images for vulnerabilities.
Correct Answer: D
Selected Answer: C
C is correct.
upvoted 1 times
Selected Answer: C
Selected Answer: C
I choose C
upvoted 2 times
Selected Answer: C
Enabling the Container Scanning API in Artifact Registry and scanning the built container images is a best practice because it allows you to perform
security scans within the same environment where the built images are stored. This helps minimize the changes that could disrupt other teams'
processes, as the images are already in Artifact Registry, and the scanning results can be easily accessed by the deployment pipeline in Google
Cloud. Additionally, the Container Scanning API integrates with Google Cloud security and governance tools, allowing you to enforce security
policies and manage vulnerabilities in a centralized and automated way.
upvoted 1 times
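A minimal sketch of option C (the repository path is a placeholder); once the API is enabled, Artifact Registry scans images automatically on push:
gcloud services enable containerscanning.googleapis.com
# List pushed images together with their vulnerability occurrences
gcloud artifacts docker images list us-docker.pkg.dev/my-project/my-repo --show-occurrences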
Selected Answer: C
https://cloud.google.com/container-analysis/docs/automated-scanning-howto#view_the_image_vulnerabilities
upvoted 1 times
You are developing an online gaming platform as a microservices application on Google Kubernetes Engine (GKE). Users on social media are
complaining about long loading times for certain URL requests to the application. You need to investigate performance bottlenecks in the
application and identify which HTTP requests have a significantly high latency span in user requests. What should you do?
A. Configure GKE workload metrics using kubectl. Select all Pods to send their metrics to Cloud Monitoring. Create a custom dashboard of
application metrics in Cloud Monitoring to determine performance bottlenecks of your GKE cluster.
B. Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud
Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths.
C. Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for
inspection and analysis. Create an analysis report on Trace to analyze user requests.
D. Install tcpdump on your GKE nodes. Run tcpdump to capture network traffic over an extended period of time to collect data. Analyze the
Correct Answer: A
Selected Answer: C
This approach allows you to update your application code to send traces to Trace for inspection and analysis. You can then create an analysis report
on Trace to analyze user requests. This will help you identify which HTTP requests have a significantly high latency span in user requests, which
seems to be the main concern according to the complaints from users on social media.
upvoted 1 times
Selected Answer: C
Selected Answer: C
This is the best way to investigate performance bottlenecks in a microservices application. By using OpenTelemetry, you can collect traces from all
of your microservices and analyze them in Trace. This will allow you to identify which requests are taking the longest and where the bottlenecks are
occurring.
upvoted 2 times
Selected Answer: C
correcting my choice
upvoted 1 times
The question clearly says: performance bottlenecks and which step has high latency ---> Cloud Trace
https://cloud.google.com/trace/docs/overview
upvoted 1 times
Selected Answer: C
https://cloud.google.com/trace/docs/setup#when-to-instrument
upvoted 1 times
Selected Answer: C
Instrumenting your microservices with the OpenTelemetry tracing package, updating your application code to send traces to Trace for inspection
and analysis, and creating an analysis report on Trace would be the recommended solution for investigating performance bottlenecks in the
application and identifying HTTP requests with high latency. This would allow you to visualize and analyze the complete request-response cycle
and identify specific parts of the application that might be contributing to long loading times.
upvoted 1 times
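As a small pointer for option C, a Python service would typically start by installing the OpenTelemetry SDK and the Cloud Trace exporter (package names as published on PyPI; the Flask instrumentation is just one example of an auto-instrumentation library):
pip install opentelemetry-sdk opentelemetry-exporter-gcp-trace opentelemetry-instrumentation-flask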
https://cloud.google.com/stackdriver/docs/solutions/gke/workload-metrics
upvoted 1 times
You need to load-test a set of REST API endpoints that are deployed to Cloud Run. The API responds to HTTP POST requests. Your load tests must
You want to follow Google-recommended best practices. How should you configure the load testing?
A. Create an image that has cURL installed, and configure cURL to run a test plan. Deploy the image in a managed instance group, and run one
B. Create an image that has cURL installed, and configure cURL to run a test plan. Deploy the image in an unmanaged instance group, and run
C. Deploy a distributed load testing framework on a private Google Kubernetes Engine cluster. Deploy additional Pods as needed to initiate
D. Download the container image of a distributed load testing framework on Cloud Shell. Sequentially start several instances of the container
Correct Answer: D
Selected Answer: C
Requirements are very clear: load testing with concurrent users/threads + multiple source origins/IP addresses. C is the best choice.
upvoted 1 times
Selected Answer: D
Selected Answer: C
Option D, which involves starting several instances of a load testing framework container on Cloud Shell, may not be a recommended approach for
several reasons:
Cloud Shell is a shell environment for managing resources hosted on Google Cloud and does not provide a scalable infrastructure for running load
tests.
Starting several instances of a container on Cloud Shell is not a highly available or scalable solution for load testing, and may not provide sufficient
parallelism or control over the source IP addresses of the traffic.
Using a private Google Kubernetes Engine cluster to deploy a distributed load testing framework allows for scaling up the load testing by
deploying additional Pods, which can provide more control over the number of concurrent users and the source IP addresses of the traffic, and can
provide a more robust and scalable infrastructure for load testing.
upvoted 3 times
But
"Starting several instances of a container on Cloud Shell is not a highly available or scalable solution for load testing, and may not provide
sufficient parallelism or control over the source IP addresses of the traffic."
I can't agree with this explanation.
Please show me where this explanation is written.
It's normal to launch Compute Engine instances from Cloud Shell.
I think we can increase the load step by step by adding more Compute Engine instances, launched from Cloud Shell, to run the load test.
Can a test run from GKE really satisfy the condition "from multiple source IP addresses"?
I think this question's answer is D.
upvoted 1 times
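For what it's worth, scaling out option C's distributed framework on GKE is one command once it is deployed (assuming a Locust-style setup with a worker Deployment named locust-worker):
kubectl scale deployment locust-worker --replicas=20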
Selected Answer: C
Your team is creating a serverless web application on Cloud Run. The application needs to access images stored in a private Cloud Storage
bucket. You want to give the application Identity and Access Management (IAM) permission to access the images in the bucket, while also
securing the services using Google-recommended best practices. What should you do?
A. Enforce signed URLs for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service
account.
B. Enforce public access prevention for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine
C. Enforce signed URLs for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the
Storage Object Viewer IAM role on the bucket to the service account.
D. Enforce public access prevention for the desired bucket. Create and update the Cloud Run service to use a user-managed service account.
Grant the Storage Object Viewer IAM role on the bucket to the service account.
Correct Answer: B
Selected Answer: D
This approach allows you to secure your Cloud Storage bucket by enforcing public access prevention, which prevents data from being accidentally
shared with the public. By creating and updating the Cloud Run service to use a user-managed service account, you can ensure that only this
service has access to the bucket. Granting the Storage Object Viewer IAM role on the bucket to the service account allows the service to read
objects stored in the bucket.
upvoted 1 times
D is right.
1) Create a service account with a role that allows viewing the objects in the Cloud Storage bucket.
2) Create policies to prevent public access to the bucket.
Selected Answer: D
This is the most secure and efficient way to give the application Identity and Access Management (IAM) permission to access the images in the bucket.
upvoted 2 times
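A minimal sketch of option D (the bucket, service account, and service names are placeholders):
# Enforce public access prevention on the bucket
gcloud storage buckets update gs://my-images-bucket --public-access-prevention
gcloud iam service-accounts create web-app-sa
gcloud storage buckets add-iam-policy-binding gs://my-images-bucket \
  --member="serviceAccount:web-app-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
gcloud run services update my-web-app --region=us-central1 \
  --service-account=web-app-sa@my-project.iam.gserviceaccount.com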
You are using Cloud Run to host a global ecommerce web application. Your company’s design team is creating a new color scheme for the web
app. You have been tasked with determining whether the new color scheme will increase sales. You want to conduct testing on live production
A. Use an external HTTP(S) load balancer to route a predetermined percentage of traffic to two different color schemes of your application.
Analyze the results to determine whether there is a statistically significant difference in sales.
B. Use an external HTTP(S) load balancer to route traffic to the original color scheme while the new deployment is created and tested. After
testing is complete, reroute all traffic to the new color scheme. Analyze the results to determine whether there is a statistically significant
difference in sales.
C. Use an external HTTP(S) load balancer to mirror traffic to the new version of your application. Analyze the results to determine whether
D. Enable a feature flag that displays the new color scheme to half of all users. Monitor sales to see whether they increase for this group of
users.
Correct Answer: C
Selected Answer: A
Correct answer is A. This is classic A/B testing. Since you already have a new version, built into an image, all you need to do is use the load balancer
to split traffic between the old version and the new version. See: https://cloud.google.com/load-balancing/docs/l7-internal/traffic-management#traffic_actions_weight-based_traffic_splitting. Note that global load balancers can route to serverless services.
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless
upvoted 6 times
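Note that when both color schemes exist as Cloud Run revisions, a comparable percentage split can also be set on the service itself rather than at the load balancer (the service name and the revision names blue and green are hypothetical):
gcloud run services update-traffic store-frontend \
  --region=us-central1 --to-revisions=blue=50,green=50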
Selected Answer: A
Considering the importance of traffic analysis and the need for precise control over traffic distribution for a global ecommerce web application,
Option A is likely the better choice. This option allows for detailed monitoring and analysis of user interactions with different color schemes,
offering clear insights into which version performs better in terms of sales. The use of an external HTTP(S) load balancer for traffic routing provides
a more controlled environment for conducting such a study.
upvoted 1 times
A is correct.
upvoted 1 times
Selected Answer: A
A is right. D specifies 50% of the users, which is not correct. In reality, the traffic split is an 80-20 or 75-25 ratio. This is a specialized version of canary
deployments.
upvoted 1 times
This is the best way to test the new color scheme on live production traffic. By enabling a feature flag, you can display the new color scheme to a
subset of users while keeping the old color scheme for the rest of the users. This will allow you to compare sales between the two groups of users
and determine whether the new color scheme has a statistically significant impact on sales.
upvoted 1 times
You are a developer at a large corporation. You manage three Google Kubernetes Engine clusters on Google Cloud. Your team’s developers need to
switch from one cluster to another regularly without losing access to their preferred development tools. You want to configure access to these
multiple clusters while following Google-recommended best practices. What should you do?
A. Ask the developers to use Cloud Shell and run gcloud container clusters get-credentials to switch to another cluster.
B. In a configuration file, define the clusters, users, and contexts. Share the file with the developers and ask them to use kubectl config to add
C. Ask the developers to install the gcloud CLI on their workstation and run gcloud container clusters get-credentials to switch to another
cluster.
D. Ask the developers to open three terminals on their workstation and use kubectl config to configure access to each cluster.
Correct Answer: C
Selected Answer: C
This approach allows developers to switch between different Google Kubernetes Engine clusters directly from their local workstation. The gcloud
container clusters get-credentials command configures kubectl with the credentials of the specified cluster, making it easy for developers to
switch contexts and interact with different clusters.
upvoted 1 times
Selected Answer: B
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
upvoted 1 times
Selected Answer: B
Option B is the best solution because it is secure, convenient, and time-efficient. By using a configuration file, you can define the clusters, users,
and contexts that you want to use. You can then share the file with the developers, who can use it to add the cluster, user, and context details to
their kubeconfig file. Once the developers have added the cluster, user, and context details to their kubeconfig file, they can switch to another
cluster by using the following command: kubectl config use-context <context-name>
upvoted 2 times
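To illustrate: gcloud writes one context per cluster into the kubeconfig file, and switching is then a single command (the cluster, region, and project names are placeholders):
gcloud container clusters get-credentials dev-cluster --region=us-central1
gcloud container clusters get-credentials prod-cluster --region=us-east1
kubectl config get-contexts
kubectl config use-context gke_my-project_us-central1_dev-cluster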
You are a lead developer working on a new retail system that runs on Cloud Run and Firestore. A web UI requirement is for the user to be able to
browse through all products. A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances
are exceeding memory limits errors during busy times. This error coincides with spikes in the number of Firestore queries.
You need to prevent Cloud Run from crashing and decrease the number of Firestore queries. You want to use a solution that optimizes system
A. Modify the query that returns the product list using cursors with limits.
C. Modify the query that returns the product list using integer offsets.
Correct Answer: C
Selected Answer: A
A is correct.
upvoted 1 times
Selected Answer: A
A is best: use pagination with a limit on the page size. This is called lazy loading of data.
upvoted 1 times
Selected Answer: A
A cursor is a pointer to a specific location in a Firestore database. By using cursors with limits, you can control the number of documents that are
returned in a query. This can help to reduce the number of Firestore queries that are made, which can improve performance and prevent Cloud
Run from crashing.
upvoted 3 times
You are a developer at a large organization. Your team uses Git for source code management (SCM). You want to ensure that your team follows
Google-recommended best practices to manage code to drive higher rates of software delivery. Which SCM process should your team use?
A. Each developer commits their code to the main branch before each product release, conducts testing, and rolls back if integration issues
are detected.
B. Each group of developers copies the repository, commits their changes to their repository, and merges their code into the main repository
C. Each developer creates a branch for their own work, commits their changes to their branch, and merges their code into the main branch
daily.
D. Each group of developers creates a feature branch from the main branch for their work, commits their changes to their branch, and merges
their code into the main branch after the change advisory board approves it.
Correct Answer: C
Selected Answer: D
D is correct.
upvoted 1 times
Selected Answer: D
B. Putting all commits inside a single repo would place a heavy burden on branching and commit history. This not only makes checkouts slower but is also
hard to manage: think about a small-to-mid-size team (10 developers) creating 200 commits per day, which is roughly 18,000 commits over 3 months. The
maintainers of a repo would have a hard time chasing down and merging that many commits. Not to mention that repo permission management
would be a nightmare if the repo extends to enterprise level (200-500 developers / SRE / QA / SA). Forking the repo, as in option B, would be a better choice:
each developer deals with their own fork, and the PR is merged and squashed once ready. In fact, all famous OSS projects, such as Kubernetes, Grafana, Prometheus,
etc., use this approach.
upvoted 1 times
Selected Answer: D
Use a centralized repository. A centralized repository is a single location where all of your team's code is stored. This makes it easy for everyone to
access the latest code, and it also helps to prevent conflicts.
Use branches. Branches are a way to create a separate version of the code for development purposes. This allows developers to work on new
features or bug fixes without affecting the main branch of the code.
upvoted 2 times
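A sketch of the daily rhythm option C describes (the branch name and commit message are made up):
git checkout -b fix/cart-rounding main
git commit -am "Fix rounding error in cart totals"
git push origin fix/cart-rounding
# open a pull request and merge to main the same day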
Selected Answer: D
You have a web application that publishes messages to Pub/Sub. You plan to build new versions of the application locally and want to quickly test
Pub/Sub integration for each new build. How should you configure local testing?
A. Install Cloud Code on the integrated development environment (IDE). Navigate to Cloud APIs, and enable Pub/Sub against a valid Google
Project ID. When developing locally, configure your application to call pubsub.googleapis.com.
B. Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your
application to use the local emulator with ${gcloud beta emulators pubsub env-init}.
C. In the Google Cloud console, navigate to the API Library, and enable the Pub/Sub API. When developing locally, configure your application to
call pubsub.googleapis.com.
D. Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your
Correct Answer: A
Executing "gcloud beta emulators pubsub env-init" is required for local testing when the application and the emulator run either on the same
machine or on different machines. The export of the PUBSUB_EMULATOR_HOST variable is an additional step required only in the latter case (when
the application and the emulator run on different machines).
upvoted 1 times
Based on the common steps for implementing the Pub/Sub emulator, the best choice for configuring local testing of your web application's
Pub/Sub integration would be:
Option B: Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your
application to use the local emulator with ${gcloud beta emulators pubsub env-init}.
This option covers the essential steps for both scenarios (same machine or different machines) and provides a clear path for setting up and utilizing
the Pub/Sub emulator effectively for local development and testing.
upvoted 1 times
This approach allows you to test your application’s integration with Pub/Sub without making actual calls to the Pub/Sub service, which can be
time-consuming and may incur costs. Instead, your application interacts with the local emulator, which mimics the behavior of the actual Pub/Sub
service. This makes it a fast and cost-effective solution for local testing. Remember to set the PUBSUB_EMULATOR_HOST environment variable to
point your application to the local emulator.
upvoted 1 times
Selected Answer: B
https://cloud.google.com/pubsub/docs/emulator#automatically_setting_the_variables
If your application and the emulator run on the same machine, you can set the environment variables automatically with:
1) $(gcloud beta emulators pubsub env-init)
If your application and the emulator run on different machines, set the environment variables manually with:
1) Run the env-init command: gcloud beta emulators pubsub env-init
2) On the machine that runs your application, set the PUBSUB_EMULATOR_HOST
upvoted 3 times
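Putting those steps together (my-test-project is a placeholder; any syntactically valid project ID works with the emulator):
gcloud components install pubsub-emulator
gcloud beta emulators pubsub start --project=my-test-project &
$(gcloud beta emulators pubsub env-init)   # exports PUBSUB_EMULATOR_HOST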
Selected Answer: D
Answer: D
upvoted 1 times
Selected Answer: D
Your ecommerce application receives external requests and forwards them to third-party API services for credit card processing, shipping, and
Your customers are reporting that your application is running slowly at unpredictable times. The application doesn’t report any metrics. You need
to determine the cause of the inconsistent performance. What should you do?
A. Install the OpenTelemetry library for your respective language, and instrument your application.
B. Install the Ops Agent inside your container and configure it to gather application metrics.
C. Modify your application to read and forward the X-Cloud-Trace-Context header when it calls the downstream services.
D. Enable Managed Service for Prometheus on the Google Kubernetes Engine cluster to gather application metrics.
Correct Answer: C
Selected Answer: A
The key part of the question prompting the use of OpenTelemetry over Prometheus is: "Your customers are reporting that your application is
running slowly at unpredictable times. The application doesn’t report any metrics." This indicates a need for detailed instrumentation to trace and
diagnose performance issues across various external service interactions. OpenTelemetry, with its comprehensive APIs and SDKs for collecting a
wide range of telemetry data (traces, metrics, logs), is well-suited for this task. It allows for tracing the application's workflow and identifying
bottlenecks, which is essential for understanding the root cause of the inconsistent performance.
upvoted 1 times
OpenTelemetry provides a single set of APIs, libraries, agents, and collector services to capture distributed traces and metrics from your application.
You can analyze them using Prometheus, Jaeger, and other observability tools. This will help you understand the performance of your application
and identify any bottlenecks or issues causing the slowdown. It’s a comprehensive tool for observability, making it a suitable choice for this
scenario.
upvoted 1 times
Selected Answer: A
I go with A. The OpenTelemetry standard supports tracing, metrics, and logging. With tracing we can find out what is causing the performance issue. Zipkin
is almost obsolete now.
upvoted 1 times
Selected Answer: A
OpenTelemetry is a set of APIs, libraries, and agents that help you collect telemetry data (such as traces, metrics, and logs) from your applications.
By instrumenting your application with OpenTelemetry, you can gather performance metrics, trace requests across different components, and
identify potential bottlenecks or issues causing inconsistent performance.
upvoted 1 times
Managed Service for Prometheus collects metrics from Prometheus exporters and lets you query the data globally using PromQL, meaning that
you can keep using any existing Grafana dashboards, PromQL-based alerts, and workflows. It is hybrid- and multi-cloud compatible, can monitor
both Kubernetes and VM workloads, retains data for 24 months, and maintains portability by staying compatible with upstream Prometheus.
https://cloud.google.com/stackdriver/docs/managed-prometheus
upvoted 1 times
You are developing a new application. You want the application to be triggered only when a given file is updated in your Cloud Storage bucket.
Your trigger might change, so your process must support different types of triggers. You want the configuration to be simple so that multiple team
members can update the triggers in the future. What should you do?
A. Configure Cloud Storage events to be sent to Pub/Sub, and use Pub/Sub events to trigger a Cloud Build job that executes your application.
B. Create an Eventarc trigger that monitors your Cloud Storage bucket for a specific filename, and set the target as Cloud Run.
C. Configure a Cloud Function that executes your application and is triggered when an object is updated in Cloud Storage.
D. Configure a Firebase function that executes your application and is triggered when an object is updated in Cloud Storage.
Correct Answer: C
Selected Answer: C
https://cloud.google.com/functions/docs/calling
upvoted 1 times
Selected Answer: C
I change my answer to C. The choice of Option C ("Configure a Cloud Function that executes your application and is triggered when an object is
updated in Cloud Storage") is strongly supported by the sentence "You want the configuration to be simple so that multiple team members can
update the triggers in the future." Cloud Functions provide a more straightforward and user-friendly approach for setting up and managing
triggers, making it easier for various team members to work with and update the configuration as needed. This simplicity aligns well with the
requirement for an easily manageable and modifiable trigger process.
upvoted 1 times
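A minimal sketch of option C for a 1st-gen function (the bucket and function names are placeholders); the finalize event fires when an object is created or overwritten:
gcloud functions deploy on-file-update \
  --runtime=python311 --entry-point=handle --source=. \
  --trigger-event=google.storage.object.finalize \
  --trigger-resource=my-upload-bucket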
Selected Answer: B
Given the requirement in the question for a process that supports different types of triggers and allows for easy updating by multiple team
members, Option B (Create an Eventarc trigger that monitors your Cloud Storage bucket for a specific filename, and set the target as Cloud Run)
seems more relevant.
This option offers greater flexibility for handling various types of triggers with Eventarc, which can be crucial if the application's triggering
requirements change over time. Eventarc's ability to integrate with multiple Google Cloud services and route events to Cloud Run provides a robust
and scalable solution for diverse event handling. This aligns well with the need for a versatile and easily updatable trigger mechanism.
upvoted 1 times
select B
upvoted 1 times
You are defining your system tests for an application running in Cloud Run in a Google Cloud project. You need to create a testing environment
that is isolated from the production environment. You want to fully automate the creation of the testing environment with the least amount of
A. Using Cloud Build, execute Terraform scripts to create a new Google Cloud project and a Cloud Run instance of your application in the
B. Using Cloud Build, execute a Terraform script to deploy a new Cloud Run revision in the existing Google Cloud project. Use traffic splitting to
C. Using Cloud Build, execute gcloud commands to create a new Google Cloud project and a Cloud Run instance of your application in the
D. Using Cloud Build, execute gcloud commands to deploy a new Cloud Run revision in the existing Google Cloud project. Use traffic splitting
Correct Answer: C
Selected Answer: A
https://cloud.google.com/docs/terraform/best-practices-for-terraform
upvoted 1 times
Selected Answer: A
A is correct
upvoted 1 times
Selected Answer: A
A is correct
upvoted 1 times
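As a hedged sketch of what option A's Cloud Build step might execute (the project_id variable and systest naming are assumptions about how the Terraform scripts are written; $BUILD_ID is Cloud Build's built-in substitution):
terraform init
terraform apply -auto-approve -var="project_id=systest-${BUILD_ID}"
# ...run the system tests against the new project's Cloud Run URL...
terraform destroy -auto-approve -var="project_id=systest-${BUILD_ID}"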
You are a cluster administrator for Google Kubernetes Engine (GKE). Your organization’s clusters are enrolled in a release channel. You need to be
informed of relevant events that affect your GKE clusters, such as available upgrades and security bulletins. What should you do?
D. Create an RSS subscription to receive a daily summary of the GKE release notes.
Correct Answer: B
Selected Answer: A
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-notifications
upvoted 2 times
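A sketch of option A, wiring cluster notifications to a Pub/Sub topic (the topic, cluster, and project names are placeholders):
gcloud pubsub topics create gke-cluster-notifications
gcloud container clusters update my-cluster --region=us-central1 \
  --notification-config=pubsub=ENABLED,pubsub-topic=projects/my-project/topics/gke-cluster-notifications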
Selected Answer: D
https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels#channels
upvoted 3 times
You are tasked with using C++ to build and deploy a microservice for an application hosted on Google Cloud. The code needs to be containerized
and use several custom software libraries that your team has built. You do not want to maintain the underlying infrastructure of the application.
B. Use Cloud Build to create the container, and deploy it on Cloud Run.
C. Use Cloud Shell to containerize your microservice, and deploy it on a Container-Optimized OS Compute Engine instance.
D. Use Cloud Shell to containerize your microservice, and deploy it on standard Google Kubernetes Engine.
Correct Answer: D
Selected Answer: B
It's B
upvoted 1 times
Selected Answer: B
For me is B
upvoted 2 times
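Option B is roughly two commands end to end (the image path and service name are placeholders; the build assumes a Dockerfile in the current directory):
gcloud builds submit --tag=us-docker.pkg.dev/my-project/my-repo/cpp-service
gcloud run deploy cpp-service \
  --image=us-docker.pkg.dev/my-project/my-repo/cpp-service --region=us-central1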
You need to containerize a web application that will be hosted on Google Cloud behind a global load balancer with SSL certificates. You don’t have
the time to develop authentication at the application level, and you want to offload SSL encryption and management from your application. You
want to configure the architecture using managed services where possible. What should you do?
A. Host the application on Google Kubernetes Engine, and deploy an NGINX Ingress Controller to handle authentication.
B. Host the application on Google Kubernetes Engine, and deploy cert-manager to manage SSL certificates.
C. Host the application on Compute Engine, and configure Cloud Endpoints for your application.
D. Host the application on Google Kubernetes Engine, and use Identity-Aware Proxy (IAP) with Cloud Load Balancing and Google-managed
certificates.
Correct Answer: B
Selected Answer: D
IAP provides a way to control access to applications running on GCP without the need for traditional VPNs. It works by verifying a user’s identity
and determining if that user should be allowed access to the application. This is especially useful since you do not have the time to develop
authentication at the application level. IAP can handle this for you.
upvoted 1 times
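Two representative commands from option D's setup (the domain and backend service names are placeholders):
# Google-managed SSL certificate for the external load balancer
gcloud compute ssl-certificates create web-cert --domains=shop.example.com --global
# Turn on IAP for the load balancer's backend service
gcloud iap web enable --resource-type=backend-services --service=web-backend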
Selected Answer: D
It should be D
upvoted 1 times
You manage a system that runs on stateless Compute Engine VMs and Cloud Run instances. Cloud Run is connected to a VPC, and the ingress
setting is set to Internal. You want to schedule tasks on Cloud Run. You create a service account and grant it the roles/run.invoker Identity and
Access Management (IAM) role. When you create a schedule and test it, a 403 Permission Denied error is returned in Cloud Logging. What should
you do?
B. Configure a cron job on the Compute Engine VMs to trigger Cloud Run on schedule.
C. Change the Cloud Run ingress setting to 'Internal and Cloud Load Balancing.'
Correct Answer: A
Selected Answer: D
Cloud Scheduler can trigger Cloud Run services, but in this case, where the ingress is set to 'Internal', direct invocation might not work. Instead, you
can use Cloud Scheduler in combination with Pub/Sub. Cloud Scheduler can create a Pub/Sub message on a schedule, and this Pub/Sub message
can then trigger the Cloud Run service. This approach is commonly used for invoking services with restricted network access.
upvoted 1 times
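A sketch of option D (the job, topic, schedule, Cloud Run URL, and service account are placeholders):
gcloud pubsub topics create scheduled-tasks
gcloud scheduler jobs create pubsub nightly-job \
  --schedule="0 3 * * *" --topic=scheduled-tasks --message-body="run" \
  --location=us-central1
gcloud pubsub subscriptions create scheduled-tasks-push --topic=scheduled-tasks \
  --push-endpoint=https://my-service-abc123-uc.a.run.app/tasks \
  --push-auth-service-account=scheduler-sa@my-project.iam.gserviceaccount.com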
Selected Answer: D
D. When setting up the Pub/Sub subscription, use the push type and use the service account with the invoker role for authentication. A: no additional
permissions are needed. B: it could work if the VMs are in the same VPC, but it is not best practice. C: that setting is only for connecting through a load balancer.
You work on an application that relies on Cloud Spanner as its main datastore. New application features have occasionally caused performance
regressions. You want to prevent performance issues by running an automated performance test with Cloud Build for each commit made. If
multiple commits are made at the same time, the tests might run concurrently. What should you do?
A. Create a new project with a random name for every build. Load the required data. Delete the project after the test is run.
B. Create a new Cloud Spanner instance for every build. Load the required data. Delete the Cloud Spanner instance after the test is run.
C. Create a project with a Cloud Spanner instance and the required data. Adjust the Cloud Build build file to automatically restore the data to
D. Start the Cloud Spanner emulator locally. Load the required data. Shut down the emulator after the test is run.
Correct Answer: B
Selected Answer: B
Since the testing needs to accommodate scenarios where multiple commits are made simultaneously, and hence multiple tests might run
concurrently, the testing environment should support isolated and independent testing instances to avoid interference among tests.
Given these requirements, using the Cloud Spanner emulator would not be the best choice for this scenario. The emulator is primarily suited for
local development, unit, and integration testing, and is not built for production-scale performance testing. It may not accurately replicate
performance characteristics at scale or under load, which are crucial aspects in this case.
upvoted 1 times
Selected Answer: B
D. https://cloud.google.com/sdk/gcloud/reference/beta/emulators/spanner Use an emulator if possible when testing. B could work, but in the real
world, spinning up a Cloud Spanner cluster takes a lot of time, and also money.
upvoted 2 times
Isolation of Test Environments: Creating a new Cloud Spanner instance for each build ensures complete isolation of test environments. This is
crucial for accurate performance testing, as it avoids any interference from other concurrent test runs.
Consistency and Accuracy of Data: By loading the required data into each new instance, you ensure that every test starts with a consistent dataset,
which is essential for reliable and repeatable performance testing.
Resource Management: Automatically deleting the Cloud Spanner instance after each test helps manage resources and costs effectively. It ensures
that you are only using and paying for resources during the duration of the test.
Parallel Testing: This approach supports concurrent testing for multiple commits, as each test run has its own dedicated Cloud Spanner instance.
upvoted 2 times
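A sketch of the per-build lifecycle from option B (the instance naming and sizing are assumptions; $BUILD_ID is Cloud Build's built-in substitution):
gcloud spanner instances create perf-${BUILD_ID} \
  --config=regional-us-central1 --processing-units=100 --description="perf test"
gcloud spanner databases create testdb --instance=perf-${BUILD_ID}
# ...load the required data and run the performance test...
gcloud spanner instances delete perf-${BUILD_ID} --quiet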
Your company's security team uses Identity and Access Management (IAM) to track which users have access to which resources. You need to
create a version control system that can integrate with your security team's processes. You want your solution to support fast release cycles and
frequent merges to your main branch to minimize merge conflicts. What should you do?
C. Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use trunk-based development.
D. Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use feature-based development.
Correct Answer: C
Selected Answer: C
"You want your solution to support fast release cycles and frequent merges to your main branch to minimize merge conflicts."
The requirement for fast release cycles and frequent merges with minimal merge conflicts aligns well with trunk-based development. In trunk-
based development, developers work in short-lived branches and merge their changes frequently into the main branch, which helps in reducing
merge conflicts and supports a more rapid and continuous release cycle.
upvoted 1 times
C. Using an external Git repository such as GitHub or Bitbucket synced with Cloud Source Repositories is a best practice, as it provides many more features,
such as PRs and branch permissions. Trunk (master) development is okay in this case, as it is a small project with fast development. For larger projects and
larger teams, it is better to use feature-based development such as GitFlow.
upvoted 1 times
GitHub Repository: GitHub is a popular and powerful platform for version control. It supports a wide range of development workflows and
integrates well with various CI/CD tools, making it suitable for fast release cycles.
Mirroring to Cloud Source Repositories: By mirroring the GitHub repository to Google Cloud Source Repositories, you can leverage Google Cloud's
IAM features for access control and security. This integration allows your security team to track and manage user access effectively within the
Google Cloud environment.
Trunk-Based Development: This development methodology involves developers merging their changes into the main branch frequently. It's well-
suited for fast-paced development environments, as it minimizes the duration of branches and reduces the likelihood of significant merge conflicts.
upvoted 1 times
You recently developed an application that monitors a large number of stock prices. You need to configure Pub/Sub to receive messages and
update the current stock price in an in-memory database. A downstream service needs the most up-to-date prices in the in-memory database to
perform stock trading transactions. Each message contains three pieces of information:
• Stock symbol
• Stock price
B. Create a pull subscription with both ordering and exactly-once delivery turned off.
C. Create a pull subscription with ordering enabled, using the stock symbol as the ordering key.
D. Create a push subscription with both ordering and exactly-once delivery turned off.
Correct Answer: A
1. The downstream application needs only the most up-to-date value for a stock price. There's no need for historical values from a time series, so
"ordering" does not make any sense in this scenario. This eliminates alternatives B, C and D. In addition, in alternative C, "using the stock symbol as
the ordering key" has no practical effect, since "ordering" is not necessary.
2. About "push" and "pull": in a "push" subscription, whenever the topic is fed with a new value, it will keep pushing it to the application until an
acknowledgement is received. Latency is lower in this case. In a "pull" subscription, there's an additional burden on the application to keep pulling
from the topic. This increases latency. A "push" subscription is recommended in such scenarios.
upvoted 1 times
Selected Answer: C
Pull Subscription for Controlled Processing: A pull subscription gives you control over when and how messages are processed. This can be
particularly important for maintaining the integrity of the in-memory database, as it allows for more deliberate handling of message backlogs and
peak loads.
Message Ordering Is Crucial: The ordering of stock price updates is critical. Using the stock symbol as the ordering key ensures that updates for a
specific stock are processed in the order they were sent. This is vital to ensure the accuracy of stock price data, as prices must be updated in the
sequence they were received to reflect the true market conditions.
No Need for Exactly-Once Delivery: In most financial data scenarios, the latest data supersedes the old. If a message is delivered more than once,
the last update for a given timestamp will leave the database in the correct state. Therefore, exactly-once delivery, which can add complexity and
overhead, might not be necessary.
upvoted 1 times
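If ordering per stock symbol were chosen (answer C, as this commenter argues), ordering must be enabled and an ordering key supplied at publish time. A minimal sketch with placeholder names; the subscription would also need message ordering enabled:

    from google.cloud import pubsub_v1

    project_id = "my-project"   # placeholder
    topic_id = "stock-prices"   # placeholder

    # Ordering must be enabled on the publisher; messages sharing an ordering
    # key are then delivered in the order they were published.
    publisher = pubsub_v1.PublisherClient(
        publisher_options=pubsub_v1.types.PublisherOptions(
            enable_message_ordering=True
        )
    )
    topic_path = publisher.topic_path(project_id, topic_id)

    # Use the stock symbol as the ordering key so updates for one symbol
    # arrive in publish order.
    future = publisher.publish(
        topic_path,
        b'{"symbol": "GOOG", "price": "142.17"}',
        ordering_key="GOOG",
    )
    print(future.result())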
You are a developer at a social media company. The company runs their social media website on-premises and uses MySQL as a backend to store
user profiles and user posts. Your company plans to migrate to Google Cloud, and your team will migrate user profile information to Firestore. You
are tasked with designing the Firestore collections. What should you do?
A. Create one root collection for user profiles, and create one root collection for user posts.
B. Create one root collection for user profiles, and create one subcollection for each user's posts.
C. Create one root collection for user profiles, and store each user's post as a nested list in the user profile document.
D. Create one root collection for user posts, and create one subcollection for each user's profile.
Correct Answer: B
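A minimal Python sketch of the structure in answer B, using the google-cloud-firestore client; the collection and field names here are illustrative, not prescribed by the question. Note that storing posts as a nested list inside the profile document (answer C) would eventually run into Firestore's 1 MiB per-document size limit, whereas a subcollection can grow without bound.

    from google.cloud import firestore

    db = firestore.Client()

    # Root collection "users" holds one document per user profile.
    user_ref = db.collection("users").document("alice")
    user_ref.set({"display_name": "Alice", "joined": firestore.SERVER_TIMESTAMP})

    # Each user's posts live in a subcollection under that profile document,
    # so posts can grow without bloating the profile document itself.
    post_ref = user_ref.collection("posts").document()
    post_ref.set({"body": "Hello, world!", "created": firestore.SERVER_TIMESTAMP})

    # Query one user's posts without touching other users' data.
    for post in user_ref.collection("posts").stream():
        print(post.id, post.to_dict())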
Your team recently deployed an application on Google Kubernetes Engine (GKE). You are monitoring your application and want to be alerted when
the average memory consumption of your containers is under 20% or above 80%. How should you configure the alerts?
A. Create a Cloud Function that consumes the Monitoring API. Create a schedule to trigger the Cloud Function hourly and alert you if the
B. In Cloud Monitoring, create an alerting policy to notify you if the average memory consumption is outside the defined range.
C. Create a Cloud Function that runs on a schedule, executes kubectl top on all the workloads on the cluster, and sends an email alert if the
D. Write a script that pulls the memory consumption of the instance at the OS level and sends an email alert if the average memory
Correct Answer: D
Selected Answer: B
Cloud Monitoring provides a user-friendly interface to create complex alerting policies. You can set up thresholds for specific metrics, like memory
consumption, and receive notifications if these thresholds are exceeded or undercut. This feature negates the need for custom scripts or functions
to monitor these metrics.
upvoted 1 times
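For reference, an alerting policy like this can also be created programmatically. A rough Python sketch with the google-cloud-monitoring client; the metric filter, threshold, and display names are assumptions for illustration, and a second COMPARISON_LT condition at 0.2 would cover the lower bound:

    from google.cloud import monitoring_v3

    project = "projects/my-project"  # placeholder project

    client = monitoring_v3.AlertPolicyServiceClient()

    # Fires when average container memory utilization stays above 80%
    # for five minutes.
    condition = monitoring_v3.AlertPolicy.Condition(
        display_name="Container memory above 80%",
        condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
            filter=(
                'resource.type = "k8s_container" AND '
                'metric.type = "kubernetes.io/container/memory/limit_utilization"'
            ),
            comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
            threshold_value=0.8,
            duration={"seconds": 300},
        ),
    )

    policy = monitoring_v3.AlertPolicy(
        display_name="GKE container memory out of range",
        combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
        conditions=[condition],
    )

    created = client.create_alert_policy(name=project, alert_policy=policy)
    print(created.name)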
Selected Answer: B
You manage a microservice-based ecommerce platform on Google Cloud that sends confirmation emails to a third-party email service provider
using a Cloud Function. Your company just launched a marketing campaign, and some customers are reporting that they have not received order
confirmation emails. You discover that the services triggering the Cloud Function are receiving HTTP 500 errors. You need to change the way
emails are handled to minimize email loss. What should you do?
B. Configure the sender application to publish the outgoing emails in a message to a Pub/Sub topic. Update the Cloud Function configuration
C. Configure the sender application to write emails to Memorystore and then trigger the Cloud Function. When the function is triggered, it
reads the email details from Memorystore and sends them to the email service.
D. Configure the sender application to retry the execution of the Cloud Function every one second if a request fails.
Correct Answer: C
Selected Answer: B
This is a robust and scalable approach. By decoupling the email sending process using Pub/Sub, you introduce a queueing mechanism. This
ensures that even if the Cloud Function encounters an issue, the email messages are not lost but remain in the queue. Additionally, Pub/Sub can
handle high throughput and provides retry mechanisms.
upvoted 1 times
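A minimal sketch of the decoupling in answer B: the sender publishes the email payload to a topic instead of calling the function directly, and the Cloud Function is deployed with a Pub/Sub trigger on that topic. Names and payload fields are placeholders.

    import json
    from google.cloud import pubsub_v1

    project_id = "my-project"      # placeholder
    topic_id = "outgoing-emails"   # placeholder

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)

    # Publish the email as a message. If the email-sending Cloud Function
    # fails (e.g. the third-party provider returns 500), the message stays
    # unacknowledged and Pub/Sub redelivers it, so no email is lost.
    email = {
        "to": "customer@example.com",      # placeholder recipient
        "template": "order_confirmation",  # placeholder template name
        "order_id": "12345",               # placeholder order id
    }
    future = publisher.publish(topic_path, json.dumps(email).encode("utf-8"))
    print("Published message", future.result())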
Selected Answer: B
Selected Answer: B
B. With Pub/Sub you can scale the load of sending emails to the Cloud Function. You can also configure exponential backoff if errors arise in the
third-party service and ensure the email is delivered
upvoted 1 times
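On the subscription side, the exponential backoff this commenter mentions can be configured with a retry policy at subscription creation. A sketch with placeholder names and assumed backoff bounds:

    from google.cloud import pubsub_v1
    from google.protobuf import duration_pb2

    project_id = "my-project"          # placeholder
    topic_id = "outgoing-emails"       # placeholder
    subscription_id = "email-sender"   # placeholder

    subscriber = pubsub_v1.SubscriberClient()
    topic_path = subscriber.topic_path(project_id, topic_id)
    subscription_path = subscriber.subscription_path(project_id, subscription_id)

    # Redeliver failed messages with exponential backoff between 10s and
    # 10min instead of hammering the struggling third-party email service.
    retry_policy = pubsub_v1.types.RetryPolicy(
        minimum_backoff=duration_pb2.Duration(seconds=10),
        maximum_backoff=duration_pb2.Duration(seconds=600),
    )

    with subscriber:
        subscriber.create_subscription(
            request={
                "name": subscription_path,
                "topic": topic_path,
                "retry_policy": retry_policy,
            }
        )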
You have a web application that publishes messages to Pub/Sub. You plan to build new versions of the application locally and need to quickly test
Pub/Sub integration for each new build. How should you configure local testing?
A. In the Google Cloud console, navigate to the API Library, and enable the Pub/Sub API. When developing locally, configure your application to
call pubsub.googleapis.com.
B. Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your
C. Run the gcloud config set api_endpoint_overrides/pubsub https://pubsubemulator.googleapis.com.com/ command to change the Pub/Sub
D. Install Cloud Code on the integrated development environment (IDE). Navigate to Cloud APIs, and enable Pub/Sub against a valid Google
Correct Answer: B
Selected Answer: B
For local testing of Pub/Sub integration, the most suitable option is B. You’d install the Pub/Sub emulator via gcloud, initiate it with a valid Google
Project ID, and configure your application to utilize the local emulator by setting the PUBSUB_EMULATOR_HOST variable. This method replicates
the Pub/Sub environment locally for efficient testing.
upvoted 1 times
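A minimal Python sketch of answer B. With PUBSUB_EMULATOR_HOST set, the client library routes all calls to the local emulator instead of pubsub.googleapis.com; the host/port below is the emulator default, and the project and topic IDs are placeholders (any project ID works against the emulator).

    import os

    # Point the client library at the local emulator (default port 8085),
    # started beforehand with: gcloud beta emulators pubsub start
    os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"

    from google.cloud import pubsub_v1

    project_id = "my-project"   # placeholder; the emulator accepts any ID
    topic_id = "test-topic"     # placeholder

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    publisher.create_topic(request={"name": topic_path})

    # Publishes go to the emulator, so each local build can be tested
    # without touching real Pub/Sub resources or incurring cost.
    future = publisher.publish(topic_path, b"hello from local build")
    print(future.result())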
You recently developed an application that monitors a large number of stock prices. You need to configure Pub/Sub to receive a high volume of
messages and update the current stock price in a single large in-memory database. A downstream service needs the most up-to-date prices in the
in-memory database to perform stock trading transactions. Each message contains three pieces of information:
• Stock symbol
• Stock price
B. Create a push subscription with both ordering and exactly-once delivery turned off.
D. Create a pull subscription with both ordering and exactly-once delivery turned off.
Correct Answer: B
1. The downstream application needs only the most up-to-date value for a stock price. There's no need for historical values from a time series, so
"ordering" does not make any sense in this scenario. This eliminates the alternatives that enable ordering.
2. Next choice is between "push" and "pull". In a "push" subscription, whenever the topic is fed with a new value, it will keep pushing it to the
application until an acknowledgement is received. Latency is lower in this case. In a "pull" subscription, there's an additional burden on the
application to keep pulling from the topic. This increases latency. A "push" subscription is recommended in such scenarios.
upvoted 1 times
Selected Answer: A
This setup provides the necessary guarantees for message delivery without duplication, and the pull model allows your service to manage message
consumption at a pace that ensures the integrity and timeliness of updates in your in-memory database.
upvoted 1 times
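If one opted for a pull subscription with exactly-once delivery, as this commenter suggests, it can be enabled at subscription creation. A sketch with placeholder names:

    from google.cloud import pubsub_v1

    project_id = "my-project"              # placeholder
    topic_id = "stock-prices"              # placeholder
    subscription_id = "stock-prices-pull"  # placeholder

    subscriber = pubsub_v1.SubscriberClient()
    topic_path = subscriber.topic_path(project_id, topic_id)
    subscription_path = subscriber.subscription_path(project_id, subscription_id)

    with subscriber:
        # Exactly-once delivery: Pub/Sub will not redeliver a message after
        # a successful acknowledgement, at the cost of some extra overhead.
        subscriber.create_subscription(
            request={
                "name": subscription_path,
                "topic": topic_path,
                "enable_exactly_once_delivery": True,
            }
        )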
Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy
REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the legacy service in a way that
is resilient and requires the fewest number of steps. You also want to be able to run probe-based health checks on the legacy service on a
separate port. How should you set up the connection? (Choose two.)
A. Use Traffic Director with a sidecar proxy to connect the application to the service.
C. Configure the legacy service's firewall to allow health checks originating from the sidecar proxy.
D. Configure the legacy service's firewall to allow health checks originating from the application.
E. Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.
Correct Answer: AC
Selected Answer: AC
1. Use Traffic Director with a sidecar proxy (Option A): This enables reliable communication between your application and the legacy service. The
sidecar proxy can manage traffic routing, load balancing, and resilience.
2. Configure the legacy service's firewall to allow health checks originating from the sidecar proxy (Option C): By allowing health checks from the
sidecar proxy, you ensure that the health checks, which are necessary for ensuring service availability, are permitted by the firewall.
upvoted 1 times
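For the firewall piece, Google Cloud health check probes originate from the documented ranges 35.191.0.0/16 and 130.211.0.0/22. A rough Python sketch with the google-cloud-compute client; the rule name, network, and health check port are placeholders:

    from google.cloud import compute_v1

    project_id = "my-project"  # placeholder

    firewall = compute_v1.Firewall(
        name="allow-health-checks",         # placeholder rule name
        network="global/networks/default",  # placeholder network
        direction="INGRESS",
        # Documented source ranges for Google Cloud health check probes.
        source_ranges=["35.191.0.0/16", "130.211.0.0/22"],
        # Assumed separate health check port on the legacy service.
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["8080"])],
    )

    client = compute_v1.FirewallsClient()
    operation = client.insert(project=project_id, firewall_resource=firewall)
    operation.result()  # wait for the rule to be created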
You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory
utilization. You need to determine which function is consuming the most CPU and memory resources. What should you do?
A. Add print commands to the application source code to log when each function is called, and redeploy the application.
B. Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the
timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.
C. Import OpenTelemetry and Trace export packages into your application, and create the trace provider. Review the latency data for your
application on the Trace overview page, and identify which functions cause the most latency.
D. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google
Correct Answer: D
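The question's application is written in Go, but the pattern in answer D looks much the same across languages. A minimal sketch of starting the Profiler agent in Python (the google-cloud-profiler package); the service name and version are placeholders:

    import googlecloudprofiler

    def main():
        # Start the Profiler agent once at startup; it samples CPU and heap
        # usage in the background and uploads profiles to Cloud Profiler,
        # where the flame graph shows which functions consume the most
        # resources.
        try:
            googlecloudprofiler.start(
                service="my-web-app",     # placeholder service name
                service_version="1.0.0",  # placeholder version
            )
        except (ValueError, NotImplementedError) as exc:
            print("Profiler failed to start:", exc)

        # ... run the application as usual ...

    if __name__ == "__main__":
        main()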
You are developing a flower ordering application. Currently you have three microservices:
You need to determine how the services will communicate with each other. You want incoming orders to be processed quickly and you need to
collect order information for fulfillment. You also want to make sure orders are not lost between your services and are able to communicate
A.
B.
C.
D.
Correct Answer: D