Professional Cloud Architect - D2cce566118d
A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer
Answer: D
Explanation:
D is the answer because HTTP(S) load balancer can direct traffic reaching a single IP to different backends
based on the incoming URL. A is not correct because configuring a new load balancer would require a new or
different SSL and DNS records which conflicts with the requirements to keep the same SSL and DNS records.
B is not correct because it goes against the requirements. The company wants to keep the old API available
while new customers and testers try the new API. C is not correct because it is not a requirement to
decommission the implementation behind the old API. Moreover, it introduces unnecessary risk in case bugs
or incompatibilities are discovered in the new API.
Question: 2 CertyIQ
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24hrs a day.
Your business analysts have experience only with using a SQL interface.
How should you store the data to optimize it for ease of analysis?
Answer: A
Explanation:
BigQuery is Google's serverless, highly scalable, low cost enterprise data warehouse designed to make all
your data analysts productive. Because there is no infrastructure to manage, you can focus on analyzing data
to find meaningful insights using familiar SQL and you don't need a database administrator.
BigQuery enables you to analyze all your data by creating a logical data warehouse over managed, columnar
storage as well as data from object storage, and spreadsheets.
Reference:
https://cloud.google.com/bigquery/
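For illustration, a minimal sketch of how an analyst could query such a data set with familiar SQL using the BigQuery Python client; the project, dataset, and table names are assumptions, not part of the question:

```python
from google.cloud import bigquery

# The client picks up credentials from the environment (e.g. a service account).
client = bigquery.Client(project="my-analytics-project")  # hypothetical project

# Standard SQL over a hypothetical table holding the migrated data set.
sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-analytics-project.logs.events`
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.event_date, row.events)
```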
Question: 3 CertyIQ
The operations manager asks you for a list of recommended practices that she should consider when migrating a
J2EE application to the cloud.
Which three practices should you recommend? (Choose three.)
A. Port the application code to run on Google App Engine
B. Integrate Cloud Dataflow into the application to capture real-time metrics
C. Instrument the application with a monitoring tool like Stackdriver Debugger
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable
Answer: CDE
Explanation:
Porting a J2EE application to App Engine will not work as-is; there are three approaches to migrating an application to the cloud (lift and shift, improve and move, and remove and replace). Instrumenting the application with monitoring (C), automating infrastructure provisioning (D), and using continuous integration with automated testing in a staging environment (E) are recommended practices regardless of which approach is chosen.
Question: 4 CertyIQ
A news feed web service has the following code running on Google App Engine. During peak load, users report that
they can see news articles they already viewed.
What is the most likely cause of this problem?
A. The session variable is local to just a single instance
B. The session variable is being overwritten in Cloud Datastore
C. The URL of the API needs to be modified to prevent caching
D. The HTTP Expires header needs to be set to -1 to stop caching
Answer: A
Explanation:
It's A. AppEngine spins up new containers automatically according to the load. During peak traffic, HTTP
requests originated by the same user could be served by different containers. Given that the variable
`sessions` is recreated for each container, it might store different data.
The problem here is that this Flask app is stateful. The `sessions` variable is the state of this app. And stateful
variables in AppEngine / Cloud Run / Cloud Functions are problematic.
A solution would be to store the session in some database (e.g. Firestore, Memorystore) and retrieve it from
there. This way the app would fetch the session from a single place and would be stateless.
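A minimal sketch of that fix, assuming a Flask app and a Memorystore (Redis) instance at a hypothetical internal IP; session data is read from and written to Redis instead of an in-memory dict, so any instance serving the request sees the same state:

```python
import json

import redis
from flask import Flask, request

app = Flask(__name__)

# Hypothetical Memorystore (Redis) endpoint shared by all App Engine instances,
# so session state no longer depends on which container serves the request.
session_store = redis.Redis(host="10.0.0.3", port=6379)


@app.route("/articles")
def articles():
    user_id = request.args.get("user")
    raw = session_store.get(f"session:{user_id}")
    session = json.loads(raw) if raw else {"viewed": []}

    # Only return articles this user has not viewed yet, then record them.
    unread = [a for a in get_latest_articles() if a not in session["viewed"]]
    session["viewed"].extend(unread)
    session_store.set(f"session:{user_id}", json.dumps(session))
    return {"articles": unread}


def get_latest_articles():
    # Placeholder for the real news-feed lookup.
    return ["article-1", "article-2", "article-3"]
```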
Question: 5 CertyIQ
An application development team believes their current logging tool will not meet their needs for their new cloud-
based product. They want a better tool to capture errors and help them analyze their historical log data. You want
to help them find a solution that meets their needs.
What should you do?
A. Direct them to download and install the Google StackDriver logging agent
B. Send them a list of online resources about logging best practices
C. Help them define their requirements and assess viable logging tools
D. Help them upgrade their current tool to take advantage of any new features
Answer: C
Explanation:
Never impose tools on customers; it is not good professional practice. They are indeed deploying to GCP, but
you need to understand their monitoring and logging requirements to make an effective proposal.
Question: 6 CertyIQ
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company's
web hosting platform. Improvements to the QA/Test processes accomplished an 80% reduction.
Which additional two approaches can you take to further reduce the rollbacks? (Choose two.)
Answer: AC
Explanation:
Note: the deployment model referenced in option A is called blue/green deployment, not green/blue.
Question: 7 CertyIQ
To reduce costs, the Director of Engineering has required all developers to move their development infrastructure
resources from on-premises virtual machines
(VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require
state to persist. You have been asked to design the process of running a development environment in Google Cloud
while providing cost visibility to the finance department.
Which two steps should you take? (Choose two.)
A. Use the --no-auto-delete flag on all persistent disks and stop the VM
B. Use the --auto-delete flag on all persistent disks and terminate the VM
C. Apply VM CPU utilization label and include it in the BigQuery billing export
D. Use Google BigQuery billing export and labels to associate cost to groups
E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM
F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM
Answer: AD
Explanation:
A is correct because persistent disks will not be deleted when an instance is stopped.
D is correct because exporting daily usage and cost estimates automatically throughout the day to a
BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to
group the costs based on team or cost center.
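As an illustration of how finance could group those exported costs by label, a hedged BigQuery sketch using the Python client; the export table name and the 'team' label key are assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical billing export table; labels are exported as a repeated key/value record.
sql = """
    SELECT l.value AS team, ROUND(SUM(cost), 2) AS total_cost
    FROM `finance-project.billing.gcp_billing_export_v1_XXXXXX`,
         UNNEST(labels) AS l
    WHERE l.key = 'team'
    GROUP BY team
    ORDER BY total_cost DESC
"""

for row in client.query(sql).result():
    print(row.team, row.total_cost)
```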
Question: 8 CertyIQ
Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting.
There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that
reports its status every second. The data from the motion detector includes only a sensor ID and several different
discrete items of information. Analysts will use this data, together with information about account owners and
office locations.
Which database type should you use?
A. Flat file
B. NoSQL
C. Relational
D. Blobstore
Answer: B
Explanation:
Relational databases were not designed to cope with the scale and agility challenges that face modern
applications, nor were they built to take advantage of the commodity storage and processing power available
today.
NoSQL fits well for:
✑ Developers are working with applications that create massive volumes of new, rapidly changing data types:
structured, semi-structured, unstructured and polymorphic data.
Incorrect Answers:
D: The Blobstore API allows your application to serve data objects, called blobs, that are much larger than the
size allowed for objects in the Datastore service.
Blobs are useful for serving large files, such as video or image files, and for allowing users to upload large
data files.
Reference:
https://www.mongodb.com/nosql-explained
Question: 9 CertyIQ
You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the
instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances
are being terminated and re-launched every minute. The instances do not have a public IP address.
You have verified the appropriate web response is coming from each instance using the curl command. You want to
ensure the backend is configured correctly.
What should you do?
A. Ensure that a firewall rules exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the
instance public IP.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance
group.
D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of
the load balancer as the source and the instance tag as the destination.
Answer: C
Explanation:
The best practice when configuring a health check is to check health and serve traffic on the same port.
However, it is possible to perform health checks on one port, but serve traffic on another. If you do use two
different ports, ensure that firewall rules and services running on instances are configured appropriately. If
you run health checks and serve traffic on the same port, but decide to switch ports at some point, be sure to
update both the backend service and the health check.
Backend services that do not have a valid global forwarding rule referencing it will not be health checked and
will have no health status.
Reference:
https://cloud.google.com/compute/docs/load-balancing/http/backend-service
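A sketch of such a firewall rule using the Compute Engine Python client, allowing Google's documented health-check source ranges (130.211.0.0/22 and 35.191.0.0/16) to reach the backends; the project, network, rule name, and port are assumptions:

```python
from google.cloud import compute_v1

firewall = compute_v1.Firewall()
firewall.name = "allow-lb-health-checks"          # hypothetical rule name
firewall.network = "global/networks/default"      # assumed network
firewall.direction = "INGRESS"
firewall.source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]  # documented health-check ranges

allowed = compute_v1.Allowed()
allowed.I_p_protocol = "tcp"
allowed.ports = ["80"]                            # port the backends serve and are health checked on
firewall.allowed = [allowed]

client = compute_v1.FirewallsClient()
operation = client.insert(project="my-project", firewall_resource=firewall)
operation.result()  # wait for the rule to be created
```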
Question: 10 CertyIQ
You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The
script is printing errors that it cannot connect to
BigQuery.
What should you do to fix the script?
Answer: C
Explanation:
C. Create a new service account with BigQuery access and execute your script with that service account.
Service accounts with limited access are a best practice. Relying on access scopes (option B) is only
recommended when using the default service account, which is itself not a recommended practice.
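A hedged sketch of the script side of that fix: once a dedicated service account with a BigQuery role exists, the Python script can authenticate with it explicitly; the key file path and query are assumptions:

```python
from google.cloud import bigquery
from google.oauth2 import service_account

# Hypothetical key file for a service account granted roles/bigquery.user.
credentials = service_account.Credentials.from_service_account_file(
    "/secrets/bq-script-sa.json"
)

client = bigquery.Client(credentials=credentials, project=credentials.project_id)

for row in client.query("SELECT 1 AS ok").result():
    print(row.ok)
```

On Compute Engine the cleaner option is to attach the service account to the VM itself so that no key file is needed; the explicit key file above is only used to make the authentication step visible.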
Question: 11 CertyIQ
Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data
center. The business owners require minimal user disruption. There are strict security team requirements for
storing passwords.
What authentication strategy should they use?
A. Use G Suite Password Sync to replicate passwords into Google
B. Federate authentication via SAML 2.0 to the existing Identity Provider
C. Provision users in Google using the Google Cloud Directory Sync tool
D. Ask users to set their Google password to match their corporate password
Answer: B
Explanation:
B is the clear choice. This method keeps the existing identity provider (AD FS, Azure AD, or whatever is in
use) as the single source of truth and does not sync passwords into Google.
Question: 12 CertyIQ
Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize
operations. They do not have any existing code for this analysis, so they are exploring all their options. These
options include a mix of batch and stream processing, as they are running some hourly jobs and live- processing
some data as it comes in.
Which technology should they use for this?
Answer: B
Explanation:
Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch
(historical) modes with equal reliability and expressiveness -- no more complex workarounds or compromises
needed.
Reference:
https://cloud.google.com/dataflow/
Question: 13 CertyIQ
Your customer is receiving reports that their recently updated Google App Engine application is taking
approximately 30 seconds to load for some of their users.
This behavior was not reported before the update.
What strategy should you take?
Explanation:
Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from
Google Cloud Platform and Amazon Web Services
(AWS). Our API also allows ingestion of any custom log data from any source. Stackdriver Logging is a fully
managed service that performs at scale and can ingest application and system log data from thousands of
VMs. Even better, you can analyze all that log data in real time.
Reference:
https://cloud.google.com/logging/
Question: 14 CertyIQ
A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data
files. The database is about to run out of storage space.
How can you remediate the problem with the least amount of downtime?
A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in
Linux.
B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then
restart the virtual machine
C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to
use with the fdisk command in Linux
D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and
mount it, and configure the database service to move the files to the new disk
E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger
disk, unmount the old disk, mount the new disk, and restart the database service
Answer: A
Explanation:
On Linux instances, connect to your instance and manually resize your partitions and file systems to use the
additional disk space that you added.
Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk,
specify the partition. If your disk does not have a partition table, specify only the disk ID. sudo resize2fs
/dev/[DISK_ID][PARTITION_NUMBER] where [DISK_ID] is the device name and [PARTITION_NUMBER] is the
partition number for the device where you are resizing the file system.
Reference:
https://cloud.google.com/compute/docs/disks/add-persistent-disk
Question: 15 CertyIQ
Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry
(PCI) compliance without compromising the ability to analyze transactional data and trends relating to which
payment methods are used.
How should you design your architecture?
Answer: A
Explanation:
Reference:
https://www.sans.org/reading-room/whitepapers/compliance/ways-reduce-pci-dss-audit-scope-tokenizing-cardholder-data-33194
Question: 16 CertyIQ
You have been asked to select the storage system for the click-data of your company's large portfolio of websites.
This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute,
with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user
experience teams.
Which storage infrastructure should you choose?
Answer: B
Explanation:
Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both
real-time access and analytics workloads.
Good for:
✑ Low-latency read/write access
✑ High-throughput analytics
✑ Native time series support
Common workloads:
✑ IoT, finance, adtech
✑ Personalization, recommendations
✑ Monitoring
✑ Geospatial datasets
✑ Graphs
Incorrect Answers:
C: Google Cloud Storage is a scalable, fully-managed, highly reliable, and cost-efficient object / blob store.
Is good for:
✑ Images, pictures, and videos
✑ Objects and blobs
✑ Unstructured data
D: Google Cloud Datastore is a scalable, fully-managed NoSQL document database for your web and mobile
applications.
Is good for:
✑ Semi-structured application data
✑ Hierarchical data
✑ Durable key-value data
✑ Common workloads:
✑ User profiles
✑ Product catalogs
✑ Game state
Reference:
https://cloud.google.com/storage-options/
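For illustration, a minimal sketch of writing one click event to Bigtable with the Python client; the instance, table, column family, and row-key scheme are assumptions, not part of the question:

```python
import datetime

from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=False)
table = client.instance("click-instance").table("clicks")  # assumed instance/table

# Row key combines a site identifier with a timestamp so related clicks sort
# together; the exact scheme here is an assumption for the sketch.
now = datetime.datetime.utcnow()
row_key = f"site-42#{now:%Y%m%d%H%M%S%f}".encode()

row = table.direct_row(row_key)
row.set_cell("click", "url", b"/products/123", timestamp=now)
row.set_cell("click", "user_agent", b"Mozilla/5.0", timestamp=now)
row.commit()
```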
Question: 17 CertyIQ
You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You
want to optimize ongoing Cloud Storage spend.
What should you do?
A. Write a lifecycle management rule in XML and push it to the bucket with gsutil
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil
C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days
D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and
schedule it with cron
Answer: B
Explanation:
B is correct. Object Lifecycle Management is the recommended way to delete objects automatically and
avoids the ongoing cost and maintenance of a cron job, which rules out C and D. The gsutil lifecycle set
command takes the lifecycle configuration as a JSON document; the XML format is only used with the XML
API, so B is preferred over A.
Question: 18 CertyIQ
Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run
on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with the least
amount of operations work and code change.
Which product should you use?
Answer: B
Explanation:
Google Cloud Dataproc is a fast, easy-to-use, low-cost and fully managed service that lets you run the Apache
Spark and Apache Hadoop ecosystem on Google
Cloud Platform. Cloud Dataproc provisions big or small clusters rapidly, supports many popular job types, and
is integrated with other Google Cloud Platform services, such as Google Cloud Storage and Stackdriver
Logging, thus helping you reduce TCO.
Reference:
https://cloud.google.com/dataproc/docs/resources/faq
Question: 19 CertyIQ
The database administration team has asked you to help them improve the performance of their new database
server running on Google Compute Engine. The database is for importing and normalizing their performance
statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB
of SSD persistent disk.
What should they change to get better performance from this system?
Answer: C
Explanation:
The answer is C because persistent disk performance is based on the total persistent disk capacity attached to an
instance and the number of vCPUs that the instance has. Increasing the persistent disk capacity will
increase its throughput and IOPS, which in turn improves the performance of MySQL.
Question: 20 CertyIQ
You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes
from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading.
Where should you store the data?
A. Google BigQuery
B. Google Cloud SQL
C. Google Cloud Bigtable
D. Google Cloud Storage
Answer: C
Explanation:
Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both
real-time access and analytics workloads.
Good for:
✑ Low-latency read/write access
✑ High-throughput analytics
✑ Native time series support
Common workloads:
✑ IoT, finance, adtech
✑ Personalization, recommendations
✑ Monitoring
✑ Geospatial datasets
✑ Graphs
Reference:
https://cloud.google.com/storage-options/
Question: 21 CertyIQ
Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is
deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the
database. Currently, only a small group of select customers have access to the portal. The portal meets a
99.99% availability SLA under these conditions. However, next quarter, your company will be making the portal
available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure
the system maintains the SLA once they introduce additional user load.
What should you do?
A. Capture existing users input, and replay captured user load until autoscale is triggered on all layers. At the
same time, terminate all resources in one of the zones
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one
layer, and introduce chaos to the system by terminating random resources on both zones
C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is
triggered on all layers. At the same time, terminate random resources on both zones
D. Capture existing users input, and replay captured user load until resource utilization crosses 80%. Also,
derive estimated number of users based on existing user's usage of the app, and deploy enough resources to
handle 200% of expected load
Answer: B
Explanation:
B caters for terminating the service in both zones randomly. You want to be able to test resiliency when either
zone has an outage.
Question: 22 CertyIQ
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile
below. They report that their application deployments are taking too long.
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's
functionality.
Which two actions should you take? (Choose two.)
Answer: CE
Explanation:
The speed of deployment can be changed by limiting the size of the uploaded app, limiting the complexity of
the build necessary in the Dockerfile, if present, and by ensuring a fast and reliable internet connection.
Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient
than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to
disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large
selection of packages from the repository.
Reference:
https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU
https://www.alpinelinux.org/about/
Question: 23 CertyIQ
Your solution is producing performance bugs in production that you did not see in staging and test environments.
You want to adjust your test and deployment procedures to avoid this problem in the future.
What should you do?
Answer: C
Explanation:
C. The question states it is a performance problem; load testing will expose such issues in the test and
staging environments before they reach production. Increasing the load on test and staging is the whole
point: you want to test against a production-like load.
Question: 24 CertyIQ
A small number of API requests to your microservices-based application take a very long time. You know that each
request to the API can traverse many services.
You want to know which service takes the longest in those cases.
What should you do?
A. Set timeouts on your application so that you can fail requests faster
B. Send custom metrics for each of your requests to Stackdriver Monitoring
C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each
microservice
Answer: D
Explanation:
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each
microservice
Reference:
https://cloud.google.com/trace/docs/quickstart#find_a_trace
Question: 25 CertyIQ
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted
to a master. You want to avoid this in the future.
What should you do?
Answer: D
Explanation:
A -> Makes no sense; you don't change your database software because it is misconfigured.
B -> Might buy you more time to fix the problem, but it does not fix "the replica is never promoted to
master".
D -> By implementing regular failover drills you are forced to fix the promotion problem, and performing
regular failovers is good practice.
The answer is D.
Question: 26 CertyIQ
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible
legal proceedings.
Which approach should you use?
Answer: D
Explanation:
The answer is D because the analysis does not need to happen straightaway, or necessarily at all. The main
requirement is that the metrics be retained for compliance purposes and, if need be, remain available for
analytics as well.
Question: 27 CertyIQ
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL
database on Google Cloud Platform. The database is 4
TB, and large updates are frequent. Replication requires private address space communication.
Which networking approach should you use?
Answer: A
Explanation:
Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication
between your on-premises network and Google's network. Dedicated Interconnect enables you to transfer
large amounts of data between networks, which can be more cost effective than purchasing additional
bandwidth over the public Internet or using VPN tunnels.
Benefits:
✑ Traffic between your on-premises network and your VPC network doesn't traverse the public Internet.
Traffic traverses a dedicated connection with fewer hops, meaning there are fewer points of failure where
traffic might get dropped or disrupted.
✑ Your VPC network's internal (RFC 1918) IP addresses are directly accessible from your on-premises network.
You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can only
reach internal IP addresses over a dedicated connection. To reach Google external IP addresses, you must use
a separate connection.
✑ You can scale your connection to Google based on your needs. Connection capacity is delivered over one or
more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect).
✑ The cost of egress traffic from your VPC network to your on-premises network is reduced. A dedicated
connection is generally the least expensive method if you have a high-volume of traffic to and from Google's
network.
Reference:
https://cloud.google.com/interconnect/docs/details/dedicated
Question: 28 CertyIQ
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management
(Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit
process.
What should you do?
A. Create custom Google Stackdriver alerts and send them to the auditor
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's
view
D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the
bucket
Answer: B
Explanation:
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the
auditor. B is the neater solution because the goal is to streamline and expedite the audit process; D is
cheaper but not as convenient for analysis.
B: https://cloud.google.com/iam/docs/job-functions/auditing#scenario_external_auditors
Question: 29 CertyIQ
You are designing a large distributed application with 30 microservices. Each of your distributed microservices
needs to connect to a database back-end. You want to store the credentials securely.
Where should you store the credentials?
Answer: C
Explanation:
C is the answer, since key management systems generate, use, rotate, encrypt, and destroy cryptographic
keys and manage permissions to those keys.
A is incorrect because storing credentials in source code and source control is discoverable, in plain text, by
anyone with access to the source code. This also introduces the requirement to update code and do a
deployment each time the credentials are rotated. B is not correct because consistently populating
environment variables would require the credentials to be available, in plain text, when the session is started.
D is incorrect because instead of managing access to the config file and updating manually as keys are
rotated, it would be better to leverage a key management system. Additionally, there is increased risk if the
config file contains the credentials in plain text.
Reference:
https://cloud.google.com/kms/docs/secret-management
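A minimal sketch of that approach with Cloud KMS via the Python client: encrypt the credential once and store only the ciphertext with the service, decrypting it at startup. The project, key ring, and key names are assumptions:

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Hypothetical key used only for wrapping database credentials.
key_name = client.crypto_key_path("my-project", "global", "app-secrets", "db-creds")

# Encrypt once (e.g. at deploy time) and store the ciphertext, not the plaintext.
ciphertext = client.encrypt(
    request={"name": key_name, "plaintext": b"db-password"}
).ciphertext

# Each microservice decrypts at startup, given IAM permission on the key.
plaintext = client.decrypt(
    request={"name": key_name, "ciphertext": ciphertext}
).plaintext
print(plaintext.decode())
```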
Question: 30 CertyIQ
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate
the custom tool to the new cloud environment.
You want to advocate for the adoption of Google Cloud Deployment Manager.
What are two business risks of migrating to Cloud Deployment Manager? (Choose two.)
Answer: EF
Explanation:
Question: 31 CertyIQ
A development manager is building a new application. He asks you to review his requirements and identify what
cloud technologies he can use to meet them. The application must:
1. Be based on open-source technology for cloud portability
2. Dynamically scale compute capacity based on demand
3. Support continuous software delivery
4. Run multiple segregated copies of the same application stack
5. Deploy application bundles using dynamic templates
6. Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?
Answer: A
Explanation:
A. Google Kubernetes Engine, Jenkins, and Helm. This is a better answer than D because Kubernetes already
provides load balancing and URL-based routing through Services and Ingress, Jenkins covers continuous
software delivery, and Helm is needed for managing Kubernetes packages with dynamic templates -
install/deploy/manage/etc.
Question: 32 CertyIQ
You have created several pre-emptible Linux virtual machine instances using Google Compute Engine. You want to
properly shut down your application before the virtual machines are preempted.
What should you do?
Answer: C
Explanation:
A startup script or a shutdown script is specified through the metadata server, using the startup-script or
shutdown-script metadata keys.
Reference:
https://cloud.google.com/compute/docs/startupscript
Question: 33 CertyIQ
Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier
(web, API, and database) scales independently of the others. Network traffic should flow through the web to the
API tier and then on to the database tier. Traffic should not flow between the web and the database tier.
How should you configure the network?
Answer: D
Explanation:
Google Cloud Platform(GCP) enforces firewall rules through rules and tags. GCP rules and tags can be
defined once and used across all regions.
Reference:
https://cloud.google.com/docs/compare/openstack/
https://aws.amazon.com/it/blogs/aws/building-three-tier-architectures-with-security-groups/
Question: 34 CertyIQ
Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine
(GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the
batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the
development team.
Which three actions should you take? (Choose three.)
Answer: ACE
Explanation:
A. Use Stackdriver Logging to search for the module log entries - check the logs.
C. Use gcloud or the Cloud Console to connect to the serial console and observe the logs - check the boot
messages, remembering that a new kernel module was installed.
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics -
zoom into the time window when the problem happened.
Question: 35 CertyIQ
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data
to the cloud and test the analytics features available to them there, while also retaining that data as a long-term
disaster recovery backup.
Which two steps should you take? (Choose two.)
Answer: AE
Explanation:
The answer is AE: A because they want to load the logs for analytics, and E because storing the data in
Cloud Storage buckets covers long-term retention.
Question: 36 CertyIQ
You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self-
healing. One of the changes negatively affects your key performance indicator. You are not sure how to fix it, and
investigation could take up to a week.
What should you do?
Answer: B
Explanation:
B. Revert the source code change and rerun the deployment pipeline.
>> The revert is recorded in the source repo, so go this way, although D is also correct.
C. Log in to the servers with the bad code change and swap in the previous code.
>> C does manually what B or D can do automatically, hence eliminate.
D. Change the instance group template to the previous one and delete all instances.
>> Similar to B, but why manually do something that is automated? Hence eliminate, even though it would
also work. B is better from a code lifecycle perspective.
Hence B.
Question: 37 CertyIQ
Your organization wants to control IAM policies for different departments independently, but centrally.
Which approach should you take?
Answer: C
Explanation:
Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders, or a
combination of both. You can use folders to group projects under an organization in a hierarchy. For example,
your organization might contain multiple departments, each with its own set of GCP resources. Folders allow
you to group these resources on a per-department basis. Folders are used to group resources that share
common IAM policies. While a folder can contain multiple folders or resources, a given folder or resource can
have exactly one parent.
Reference:
https://cloud.google.com/resource-manager/docs/creating-managing-folders
Question: 38 CertyIQ
You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack
trace.
What should you do?
A. Upload missing JAR files and redeploy your application.
B. Digitally sign all of your JAR files and redeploy your application
C. Recompile the CloakedServlet class using an MD5 hash instead of SHA1
Answer: B
Explanation:
B. Digitally sign all of your JAR files and redeploy your application
Question: 39 CertyIQ
You are designing a mobile chat application. You want to ensure people cannot spoof chat messages by
proving that a message was sent by a specific user.
What should you do?
A. Tag messages client side with the originating user identifier and the destination user.
B. Encrypt the message client side using block-based encryption with a shared key.
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private
key.
D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
Answer: C
Explanation:
This one is debatable. D works if SSL client authentication is enabled. C also works: the client signs the
message with its private key (encrypting with the private key) and the server verifies it with the matching
public key, which proves who sent it. C is preferred.
Question: 40 CertyIQ
As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL
database from their private data center to their
GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of
packet loss that is disrupting the replication.
What should they do?
Answer: B
Explanation:
These are latency issues, which won't be solved by adding another VPN tunnel. If it were just a throughput
issue, more VPN capacity would do; to improve latency you need to go to layer 2. The answer is B.
Question: 41 CertyIQ
Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis.
What is the recommended approach for sanitizing this data of personally identifiable information or payment card
information before initial storage?
Answer: C
Explanation:
Reference:
https://cloud.google.com/solutions/pci-dss-compliance-in-gcp#using_data_loss_prevention_api_to_sanitize_data
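A hedged sketch of sanitizing a chat message with the Cloud DLP API before writing it to Bigtable; the project ID, info types, and transformation choice are assumptions:

```python
import google.cloud.dlp_v2

dlp = google.cloud.dlp_v2.DlpServiceClient()
parent = "projects/my-project"  # assumed project

message = "Call me at 555-0100, card 4111-1111-1111-1111"

response = dlp.deidentify_content(
    request={
        "parent": parent,
        "item": {"value": message},
        "inspect_config": {
            "info_types": [
                {"name": "PHONE_NUMBER"},
                {"name": "CREDIT_CARD_NUMBER"},
            ]
        },
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    # Replace each finding with its info type, e.g. [CREDIT_CARD_NUMBER].
                    {"primitive_transformation": {"replace_with_info_type_config": {}}}
                ]
            }
        },
    }
)

print(response.item.value)  # sanitized text, safe to store in Bigtable
```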
Question: 42 CertyIQ
You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file
so it is in the default execution path and persists across sessions?
A. ~/bin
B. Cloud Storage
C. /google/scripts
D. /usr/local/bin
Answer: A
Explanation:
https://cloud.google.com/shell/docs/how-cloud-shell-works
Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual
machine instance. This storage is on a per-user basis and is available across projects. Unlike the instance
itself, this storage does not time out on inactivity. All files you store in your home directory, including installed
software, scripts and user configuration files like .bashrc and .vimrc, persist between sessions. Your $HOME
directory is private to you and cannot be accessed by other users.
Question: 43 CertyIQ
You want to create a private connection between your instances on Compute Engine and your on-premises data
center. You require a connection of at least 20
Gbps. You want to follow Google-recommended practices. How should you set up the connection?
A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
B. Create a VPC and connect it to your on-premises data center using a single Cloud VPN.
C. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using
Dedicated Interconnect.
D. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a
single Cloud VPN.
Answer: A
Explanation:
A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
Question: 44 CertyIQ
You are analyzing and defining business processes to support your startup's trial usage of GCP, and you don't yet
know what consumer demand for your product will be. Your manager requires you to minimize GCP service costs
and adhere to Google best practices. What should you do?
A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management.
B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.
C. Utilize free tier and committed use discounts. Provision a staff position for service cost management.
D. Utilize free tier and committed use discounts. Provide training to the team about service cost management.
Answer: B
Explanation:
Answer B
Sustained use discounts are applied on incremental use after you reach certain usage thresholds. This means
that you pay only for the number of minutes that you use an instance, and Compute Engine automatically
gives you the best price. There's no reason to run an instance for longer than you need it.
- https://cloud.google.com/compute/docs/sustained-use-discounts
Committed use discounts are ideal for workloads with predictable resource needs. When you purchase a
committed use contract, you purchase compute resource (vCPUs, memory, GPUs, and local SSDs) at a
discounted price in return for committing to paying for those resources for 1 year or 3 years. The discount is
up to 57% for most resources like machine types or GPUs. The discount is up to 70% for memory-optimized
machine types. For committed use prices for different machine types, see VM instances pricing.
- https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts
Question: 45 CertyIQ
You are building a continuous deployment pipeline for a project stored in a Git source repository and want to
ensure that code changes can be verified before deploying to production. What should you do?
A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can
easily be rolled back.
B. Use Spinnaker to deploy builds to production and run tests on production deployments.
C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for
10% of users before doing a complete rollout.
D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After
testing, tag the repository for production and deploy that to the production environment.
Answer: D
Explanation:
The best answer is D, because tagging is a recommended practice with Jenkins/Spinnaker to deploy the
right code and prevent accidental (or intentional) pushes of the wrong code to production environments.
Reference:
https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/README.md
Question: 46 CertyIQ
You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5
seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert,
offered to look into the issue. You need to make sure that he can access the VMs. What should you do?
Answer: C
Explanation:
C is the correct answer. Per the requirement, the Linux expert needs access to the VMs to troubleshoot the
issue. With the health check enabled, an old VM is terminated as soon as its health check fails and a new VM
is created automatically, which prevents the expert from troubleshooting on the instance. If Stackdriver
Logging were enabled and the expert only wanted to view the logs in Cloud Logging, the project viewer role
would be enough, but the question specifically says the expert will log in to the VMs rather than look at the
logs. So option C is the correct answer.
Question: 47 CertyIQ
Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to
integrate Google Kubernetes Engine (GKE) for workload orchestration. Parts of your architecture must also be PCI
DSS-compliant. Which of the following is most accurate?
A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.
B. GKE cannot be used under PCI DSS because it is considered shared hosting.
C. GKE and GCP provide the tools you need to build a PCI DSS-compliant environment.
D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.
Answer: C
Explanation:
C: GKE and Compute Engine are PCI DSS compliant, while Cloud Functions and App Engine are not PCI DSS compliant.
Question: 48 CertyIQ
Your company has multiple on-premises systems that serve as sources for reporting. The data has not been
maintained well and has become degraded over time.
You want to use Google-recommended practices to detect anomalies in your company data. What should you do?
A. Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data.
B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.
C. Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data.
D. Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.
Answer: B
Explanation:
The answer is B:
A & C - incorrect; Datalab does not provide anomaly detection out of the box. It is geared more toward data
science scenarios such as interactive data analysis and building ML models.
B - CORRECT; Dataprep provides fast exploration and anomaly detection out of the box and lists Cloud
Storage as an ingestion medium. Refer to the ELT pipeline architecture here: https://cloud.google.com/dataprep
D - incorrect; at this time Dataprep cannot connect to SaaS or on-premises sources (not to be confused with
Dataflow, which can).
Question: 49 CertyIQ
Google Cloud Platform resources are managed hierarchically using organization, folders, and projects. When
Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at
a particular node of the hierarchy?
A. The effective policy is determined only by the policy set at the node
B. The effective policy is the policy set at the node and restricted by the policies of its ancestors
C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors
D. The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors
Answer: C
Explanation:
C Google Cloud resources are organized hierarchically, where the organization node is the root node in the
hierarchy, the projects are the children of the organization, and the other resources are descendants of
projects. You can set Identity and Access Management (IAM) policies at different levels of the resource
hierarchy. Resources inherit the policies of the parent resource. The effective policy for a resource is the union
of the policy set at that resource and the policy inherited from its parent.
Reference:
https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy
Question: 50 CertyIQ
You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain
a connection between your on-premises systems and Google Cloud until the migration is completed. You want to
make sure all your on-premise systems remain reachable during this period. How should you organize your
networking in Google Cloud?
Answer: C
Explanation:
Ans is C,
https://cloud.google.com/vpc/docs/using-vpc
"Primary and secondary ranges can't conflict with on-premises IP ranges if you have connected your VPC
network to another network with Cloud VPN, Dedicated Interconnect, or Partner Interconnect."
Question: 51 CertyIQ
You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have
created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore. What
should you do?
Answer: A
Explanation:
Question: 52 CertyIQ
You have an application that will run on Compute Engine. You need to design an architecture that takes into
account a disaster recovery plan that requires your application to fail over to another region in case of a regional
outage. What should you do?
A. Deploy the application on two Compute Engine instances in the same project but in a different region. Use
the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in
case of a disaster.
B. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP
load balancing service to fail over to an instance on your premises in case of a disaster.
C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different
region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the
standby instance group in case of a disaster.
D. Deploy the application on two Compute Engine instance groups, each in a separate project and a different
region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the
standby instance group in case of a disaster.
Answer: C
Explanation:
C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different
region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the
standby instance group in case of a disaster.
Question: 53 CertyIQ
You are deploying an application on App Engine that needs to integrate with an on-premises database. For security
purposes, your on-premises database must not be accessible through the public internet. What should you do?
A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit
access to the open on-premises database.
B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-
premises database.
C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access
to the on-premises database.
D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-
premises database.
Answer: D
Explanation:
Agree with D - "When to choose the flexible environment" "Accesses the resources or services of your Google
Cloud project that reside in the Compute Engine network."
https://cloud.google.com/appengine/docs/the-appengine-environments
Question: 54 CertyIQ
You are working in a highly secured environment where public Internet access from the Compute Engine VMs is
not allowed. You do not yet have a VPN connection to access an on-premises file server. You need to install
specific software on a Compute Engine instance. How should you install the software?
A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google
Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using
gsutil.
B. Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP
address range for Cloud Storage. Download the files to the VM using gsutil.
C. Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with a
Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to
the VM using gcloud.
D. Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic
except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.
Answer: A
Explanation:
A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google
Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using
gsutil.
Question: 55 CertyIQ
Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-
recommended practices. What should you do?
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into
Cloud Storage.
B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.
C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud
Storage.
D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
Answer: A
Explanation:
A. Transfer Appliance lets you quickly and securely transfer large amounts of data to Google Cloud Platform
via a high-capacity storage server that you lease from Google and ship to our datacenter. Transfer Appliance
is recommended for data that exceeds 20 TB or would take more than a week to upload.
Question: 56 CertyIQ
You have an application deployed on Google Kubernetes Engine using a Deployment named echo-deployment. The
deployment is exposed using a Service called echo-service. You need to perform an update to the application with
minimal downtime to the application. What should you do?
Answer: A
Explanation:
Question: 57 CertyIQ
Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud
projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs
are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.
How should you configure users' access roles?
A. Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery
dataViewer on the projects that contain the data.
B. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and
BigQuery user on the projects that contain the data.
C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery
dataViewer on the projects that contain the data.
D. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and
BigQuery jobUser on the projects that contain the data.
Answer: C
Explanation:
A is wrong because the BigQuery User role on the billing project grants more than is needed (for example, it
allows creating datasets there); jobUser, which only allows running jobs, is the least-privilege choice.
B and D are wrong because "you want to make sure that no query costs are incurred on the projects that
contain the data", so users must not run queries against the projects that contain the datasets; they only
need the dataViewer role there.
https://cloud.google.com/bigquery/docs/access-control
Question: 58 CertyIQ
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded
images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all
users have a Google Account. How should you have users upload images?
A. Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24
hours.
B. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
C. Create an App Engine web application where users can upload images. Configure App Engine to disable the
application after 24 hours. Authenticate users via Cloud Identity.
D. Create an App Engine web application where users can upload images for the next 24 hours. Authenticate
users via Cloud Identity.
Answer: B
Explanation:
Ans B "When should you use a signed URL? In some scenarios, you might not want to require your users to
have a Google account in order to access Cloud Storage" "Signed URLs contain authentication information in
their query string, allowing users without credentials to perform specific actions on a resource"
https://cloud.google.com/storage/docs/access-control/signed-urls
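A minimal sketch of generating such an upload URL with the Cloud Storage Python client; the bucket and object names are assumptions, and the URL stops working after 24 hours:

```python
import datetime

from google.cloud import storage

client = storage.Client()
blob = client.bucket("painting-uploads").blob("user-123/image.jpg")  # assumed names

upload_url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(hours=24),  # link expires after 24 hours
    method="PUT",
    content_type="image/jpeg",
)

print(upload_url)  # share this with the tester; no Google account needed
```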
Question: 59 CertyIQ
Your web application must comply with the requirements of the European Union's General Data Protection
Regulation (GDPR). You are responsible for the technical architecture of your web application. What should you
do?
A. Ensure that your web application only uses native features and services of Google Cloud Platform, because
Google already has various certifications and provides pass-on compliance when you use native features.
B. Enable the relevant GDPR compliance setting within the GCPConsole for each of the services in use within
your application.
C. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance
gaps.
D. Define a design for the security of data in your web application that meets GDPR requirements.
Answer: D
Explanation:
The GDPR lays out specific requirements for businesses and organizations who are established in Europe or
who serve users in Europe. It:
Regulates how businesses can collect, use, and store personal data
Reference:
https://www.mobiloud.com/blog/gdpr-compliant-mobile-app/
Question: 60 CertyIQ
You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a data
center outage in any of the zones within a
GCP region. What should you do?
Answer: A
Explanation:
A seems correct.
"... high availability (HA) configuration for Cloud SQL instances... A Cloud SQL instance configured for HA is
also called a regional instance and is located in a primary and secondary zone within the configured region.
In the event of an instance or zone failure, this configuration reduces downtime, and your data continues to be
available to client applications."
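As a hedged illustration only (the instance name, region, tier, and version are placeholders, not from the question), a regional high-availability SQL Server instance can be created with:
  gcloud sql instances create sqlserver-ha \
      --database-version=SQLSERVER_2017_STANDARD \
      --availability-type=REGIONAL \
      --region=us-central1 \
      --tier=db-custom-4-16384 \
      --root-password=<password>
The REGIONAL availability type places the primary and standby in different zones of the same region, which is what protects against a single-zone outage.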
Question: 61 CertyIQ
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and
need to deploy the application. What should you do?
A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
Answer: B
Explanation:
Deployment Manager is used to automate the provisioning of infrastructure, so for creating the cluster it overlaps with what gcloud already does, and it is not the tool for deploying workloads. kubectl, by contrast, runs commands against an already created cluster, which is exactly what is needed to apply the Deployment file. Hence B.
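A minimal sketch of option B (the cluster name, zone, and file name are placeholders):
  gcloud container clusters create app-cluster --zone=us-central1-a
  gcloud container clusters get-credentials app-cluster --zone=us-central1-a
  kubectl apply -f deployment.yaml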
Question: 62 CertyIQ
You need to evaluate your team readiness for a new GCP project. You must perform the evaluation and create a
skills gap plan which incorporates the business goal of cost optimization. Your team has deployed two GCP
projects successfully to date. What should you do?
A. Allocate budget for team training. Set a deadline for the new GCP project.
B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification
based on job role.
C. Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project.
D. Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve Google
Cloud certification based on job role.
Answer: B
Explanation:
B: allocate budget for team training, and create a roadmap for your team to achieve Google Cloud certification based on job role. Role-based certification closes the skills gap with the existing team, which also supports the business goal of cost optimization.
Question: 63 CertyIQ
You are designing an application for use only during business hours. For the minimum viable product release, you'd
like to use a managed product that automatically `scales to zero` so you don't incur costs when there is no activity.
Which primary compute resource should you choose?
A. Cloud Functions
B. Compute Engine
C. Google Kubernetes Engine
D. AppEngine flexible environment
Answer: A
Explanation:
A is correct: Cloud Functions is fully managed and scales to zero, so no cost is incurred when there is no activity. C is incorrect because GKE, although managed, keeps cluster nodes running and does not scale down to zero; Compute Engine and the App Engine flexible environment likewise keep instances running while idle.
Question: 64 CertyIQ
You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve
several root entities for which you have the identifiers. You want to minimize the overhead in operations performed
by Cloud Datastore. What should you do?
A. Create the Key object for each Entity and run a batch get operation
B. Create the Key object for each Entity and run multiple get operations, one operation for each entity
C. Use the identifiers to create a query filter and run a batch query operation
D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity
Answer: A
Explanation:
Correct Answer: A
Create the Key object for each Entity and run a batch get operation
https://cloud.google.com/datastore/docs/best-practices
Use batch operations for your reads, writes, and deletes instead of single operations. Batch operations are
more efficient because they perform multiple operations with the same overhead as a single operation.
Firestore in Datastore mode supports batch versions of the operations which allow it to operate on multiple
objects in a single Datastore mode call.
Such batch calls are faster than making separate calls for each individual entity because they incur the
overhead for only one service call. If multiple entity groups are involved, the work for all the groups is
performed in parallel on the server side.
Question: 65 CertyIQ
You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted
on Cloud Storage using customer-supplied encryption keys. What should you do?
A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.
C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to
upload the files to that bucket.
Answer: A
Explanation:
A is correct. Use gsutil to upload the files to Cloud Storage; the customer-supplied encryption key is provided through the client-side .boto configuration file. gsutil has no --encryption-key flag, and gcloud config does not hold encryption keys, which rules out B, C, and D.
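As an illustration (the key value, paths, and bucket name are placeholders), the customer-supplied AES-256 key goes in the [GSUtil] section of .boto before uploading:
  # .boto (client side)
  [GSUtil]
  encryption_key = <base64-encoded 256-bit AES key>

  gsutil -m cp ./files/* gs://secure-uploads/
The same setting can also be passed inline for a single run with gsutil -o "GSUtil:encryption_key=<key>" cp ... if editing .boto is not desirable.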
Question: 66 CertyIQ
Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their
game servers running on Google Cloud Platform and monitor the KPIs with low latency. How should they capture
the KPIs?
A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver
Monitoring Console to view them.
C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and
visualize the results in Google Data Studio.
D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud
Datalab.
Answer: B
Explanation:
Reference:
https://cloud.google.com/solutions/data-lifecycle-cloud-platform
Question: 67 CertyIQ
You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to
operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new
versions of the application. Which set of steps should you take?
A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup
script to clone the repository, check out the production branch, install the dependencies, and start the Python
app. 3. Restart the instances to automatically deploy new production releases.
B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a
Compute Engine image from the production branch that contains all of the dependencies and automatically
starts the Python app. 3. Rebuild the Compute Engine image, and update the instance template to deploy new
production releases.
C. Perform the following: 1. Create a Google Kubernetes Engine (GKE) cluster with n1-standard-1 type machines.
2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version
number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to 'IfNotPresent' in the staging
namespace, and then promote it to the production namespace after testing.
D. Perform the following: 1. Create a GKE cluster with n1-standard-4 type machines. 2. Build a Docker image
from the master branch with all of the dependencies, and tag it with 'latest'. 3. Create a Kubernetes
Deployment in the default namespace with the imagePullPolicy set to 'Always'. Restart the pods to
automatically deploy new production releases.
Answer: C
Explanation:
C is the answer. B wastes resources: an n1-standard-1 machine is far larger than the 0.1 CPU cores and 128 MB the app needs, so running many copies this way gives poor utilization. A technically works, but every release requires a lot of effort: restarting the machines, waiting for them to boot, re-running the startup script to clone the repository and check out the production branch, reinstalling the dependencies, and starting the Python app.
The two key requirements are maximizing machine utilization and reliably deploying new versions. On GKE, many small pods can be packed onto each n1-standard-1 node, and Docker layer caching means only the changed layers are rebuilt and pulled, without touching the other dependencies. Tagging images with a version number and setting imagePullPolicy to 'IfNotPresent' gives repeatable deployments that can be validated in the staging namespace and then promoted to production.
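A hedged sketch of the release flow in option C (project, image, and deployment names are placeholders, not from the question):
  docker build -t gcr.io/my-project/python-app:1.4.0 .
  docker push gcr.io/my-project/python-app:1.4.0
  # roll the versioned tag out to staging first, then promote the same tag to production
  kubectl -n staging set image deployment/python-app python-app=gcr.io/my-project/python-app:1.4.0
  kubectl -n production set image deployment/python-app python-app=gcr.io/my-project/python-app:1.4.0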
Question: 68 CertyIQ
Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory
domain controller for identity management.
What should you do?
A. Use the Admin Directory API to authenticate against the Active Directory domain controller.
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and
configure SAML SSO.
C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an
identity provider.
D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises
AD domain controller using Google Cloud Directory Sync.
Answer: B
Explanation:
B is correct. Domain controllers are not designed to authenticate SaaS or web applications, and that includes Cloud IAM: domain controllers speak NTLM and Kerberos, while web applications speak protocols such as OAuth and SAML. That is why federation is used. Google Cloud Directory Sync keeps cloud identities in sync with Active Directory usernames, and SAML SSO lets the on-premises directory remain the identity provider.
Question: 69 CertyIQ
You are running a cluster on Kubernetes Engine (GKE) to serve a web application. Users are reporting that a
specific part of the application is not responding anymore. You notice that all pods of your deployment keep
restarting after 2 seconds. The application writes logs to standard output. You want to inspect the logs to find the
cause of the issue. Which approach can you take?
A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster.
B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the
application.
C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the
logs.
D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.
Answer: B
Explanation:
B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the
application.
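As an illustration only (the cluster, namespace, and container names are placeholders), a Cloud Logging (Stackdriver) filter scoped to the failing container looks roughly like:
  resource.type="k8s_container"
  resource.labels.cluster_name="web-cluster"
  resource.labels.namespace_name="default"
  resource.labels.container_name="checkout"
  severity>=ERROR
Because the application writes to standard output, these container logs are captured automatically by GKE's logging integration.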
Question: 70 CertyIQ
You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce
high availability. What should you do?
Answer: D
Explanation:
Question: 71 CertyIQ
Your company is running a stateless application on a Compute Engine instance. The application is used heavily
during regular business hours and lightly outside of business hours. Users are reporting that the application is slow
during peak hours. You need to optimize the application's performance. What should you do?
A. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled
managed instance group from the instance template.
B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled
managed instance group from the custom image.
C. Create a custom image from the existing disk. Create an instance template from the custom image. Create
an autoscaled managed instance group from the instance template.
D. Create an instance template from the existing disk. Create a custom image from the instance template.
Create an autoscaled managed instance group from the custom image.
Answer: C
Explanation:
The easiest way would be to create an instance template from the running instance with --source-instance and then create the MIG, but that option is not listed. You also cannot create a MIG directly from an image; you need an instance template. So the answer is C (image -> template -> MIG).
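A minimal sketch of the image -> template -> MIG flow (all names and the zone are placeholders):
  gcloud compute images create app-image --source-disk=app-disk --source-disk-zone=us-central1-a
  gcloud compute instance-templates create app-template --image=app-image --image-project=my-project
  gcloud compute instance-groups managed create app-mig --template=app-template --size=2 --zone=us-central1-a
  gcloud compute instance-groups managed set-autoscaling app-mig --zone=us-central1-a \
      --min-num-replicas=2 --max-num-replicas=10 --target-cpu-utilization=0.6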
Question: 72 CertyIQ
Your web application has several VM instances running within a VPC. You want to restrict communications
between instances to only the paths and ports you authorize, but you don't want to rely on static IP addresses or
subnets because the app can autoscale. How should you restrict communications?
A. Use separate VPCs to restrict traffic
B. Use firewall rules based on network tags attached to the compute instances
C. Use Cloud DNS and only allow connections from authorized hostnames
D. Use service accounts and configure the web application to authorize particular service accounts to have
access
Answer: B
Explanation:
B. Use firewall rules based on network tags attached to the compute instances
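As an illustration (the network, tags, and port are placeholders), a tag-based rule avoids hard-coding IP addresses or subnets, so it keeps working as instances autoscale:
  gcloud compute firewall-rules create allow-web-to-db \
      --network=app-vpc --direction=INGRESS --action=ALLOW \
      --rules=tcp:3306 --source-tags=web --target-tags=db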
Question: 73 CertyIQ
You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage
increases and ensure that you don't run out of storage, maintain 75% CPU usage cores, and keep replication lag
below 60 seconds. What are the correct steps to meet your requirements?
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds
75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and
shard the database to reduce replication time.
B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type
to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to
reduce load on the master.
C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance
to create more space. 2. Deploy memcached to reduce CPU load. 3. Change the instance type to a 32-core
machine type to reduce replication lag.
D. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance
to create more space. 2. Deploy memcached to reduce CPU load. 3. Create a Stackdriver alert for replication
lag, and change the instance type to a 32-core machine type to reduce replication lag.
Answer: A
Explanation:
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage
exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication
lag, and shard the database to reduce replication time.
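As a hedged illustration (the instance name is a placeholder), the first step of option A can be done with:
  gcloud sql instances patch crm-db --storage-auto-increase
The CPU and replication-lag alerts are then defined as alerting policies in Cloud Monitoring (Stackdriver).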
Question: 74 CertyIQ
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This
requires a relational database that can operate on hundreds of terabytes of data. What is the Google-
recommended tool for such applications?
Answer: D
Explanation:
The keyword here is OLAP. Cloud SQL is relational SQL for OLTP and its capacity tops out at roughly 10 TB per instance, whereas BigQuery scales to petabytes. Since the question specifically mentions hundreds of terabytes of data for analytics, D (BigQuery) is the answer.
Reference:
https://cloud.google.com/files/BigQueryTechnicalWP.pdf
Question: 75 CertyIQ
You have deployed an application to Google Kubernetes Engine (GKE), and are using the Cloud SQL proxy
container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that
the application is reporting database connection issues. Your company policies require a post- mortem. What
should you do?
Answer: C
Explanation:
Question: 76 CertyIQ
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for
processing and storage. What is the Google- recommended way for your application to authenticate to the
required Google Cloud services?
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant
the appropriate Cloud Pub/Sub IAM roles.
C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for
access from each VM.
D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account
the appropriate Cloud Pub/Sub IAM roles.
Answer: A
Explanation:
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
Question: 77 CertyIQ
You want to establish a Compute Engine application in a single VPC across two regions. The application must
communicate over VPN to an on-premises network.
How should you deploy the VPN?
A. Use VPC Network Peering between the VPC and the on-premises network.
B. Expose the VPC to the on-premises network using IAM and VPC Sharing.
C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-
premises peer gateway.
Answer: D
Explanation:
It cannot be A: VPC Network Peering only provides private RFC 1918 connectivity between two Virtual Private Cloud (VPC) networks, and in this case one side is an on-premises network.
https://cloud.google.com/vpc/docs/vpc-peering
It is not C, because Cloud VPN gateways and tunnels are regional objects, not global.
https://cloud.google.com/vpn/docs/how-to/creating-static-vpns
Question: 78 CertyIQ
Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any
logs older than 45 days should be removed.
You want to optimize storage and follow Google-recommended practices. What should you do?
Answer: B
Explanation:
B. Make the tables time-partitioned, and configure the partition expiration at 45 days
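A minimal sketch with bq (dataset, table, and schema file are placeholders; 45 days equals 3,888,000 seconds):
  bq mk --table \
      --time_partitioning_type=DAY \
      --time_partitioning_expiration=3888000 \
      logs_dataset.app1_logs ./app1_schema.json
Partition expiration drops whole partitions older than 45 days automatically, which optimizes storage without any scheduled delete jobs.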
Question: 79 CertyIQ
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load.
What should you do?
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP
Console.
B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance
group for the cluster using the gcloud command.
C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler
using the gcloud command.
D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the
cluster managed instance group from the GCP Console.
Answer: A
Explanation:
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP
Console.
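As an illustration (the deployment name, cluster name, zone, and limits are placeholders), the equivalent CLI steps are roughly:
  kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=10
  gcloud container clusters update web-cluster --zone=us-central1-a \
      --enable-autoscaling --min-nodes=1 --max-nodes=5
The HorizontalPodAutoscaler adds or removes pods based on CPU, and the cluster autoscaler then adds or removes nodes to fit those pods.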
Question: 80 CertyIQ
You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your
production environment is hosted on-premises. You need to establish a secure, redundant connection between
your on-premises network and the GCP network.
What should you do?
A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a
secure connection between your networks if Dedicated Interconnect fails.
B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure
connection between your networks if Dedicated Interconnect fails.
C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a
secure connection between your networks if the Transfer Appliance fails.
D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure
connection between your networks if the Transfer Appliance fails.
Answer: B
Explanation:
B. Cloud VPN provides a secure IPsec connection, whereas Direct Peering does not. Also see the selection diagram "What GCP connection is right for you?" on the Hybrid Connectivity page: https://cloud.google.com/hybrid-connectivity/
It points out that Cloud VPN and Dedicated Interconnect are for extending your data center (private compute resources) to the cloud, while Direct Peering is for accessing Google's public services such as G Suite.
This Disaster Recovery scenario is described here, in section “Transferring data to and from GCP”:
https://cloud.google.com/architecture/dr-scenarios-building-blocks#transferring_data_to_and_from
Question: 81 CertyIQ
Your company operates nationally and plans to use GCP for multiple batch workloads, including some that are not
time-critical. You also need to use GCP services that are HIPAA-certified and manage service costs.
How should you design to meet Google best practices?
A. Provision preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-
compliant.
B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that
are not HIPAA-compliant.
C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that
are not HIPAA-compliant.
D. Provision standard VMs to the same region to reduce cost. Disable and then discontinue use of all GCP
services and APIs that are not HIPAA-compliant.
Answer: B
Explanation:
Disabling the services before discontinuing them lets you observe the effects of not using those APIs and evaluate alternatives, which narrows the choice to B or D. The question says only some workloads are not time-critical, so those batch jobs can tolerate interruption, and preemptible VMs reduce their cost. B is therefore the answer.
Question: 82 CertyIQ
Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed
instance group serving a public REST API that reads from and writes to a Cloud SQL instance.
What should you do?
A. Engage with a security company to run web scrapers that look for your users' authentication data on
malicious websites and notify you if any is found.
B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access.
C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your
application behaves.
D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually
trigger a failover while monitoring KPIs for our REST API.
Answer: C
Explanation:
C: A well-designed application should scale seamlessly as demand increases and decreases, and be resilient
enough to withstand the loss of one or more compute resources.
A highly-available, or resilient, application is one that continues to function despite expected or unexpected
failures of components in the system. If a single instance fails or an entire zone experiences a problem, a
resilient application remains fault tolerant—continuing to function and repairing itself automatically if
necessary. Because stateful information isn’t stored on any single instance, the loss of an instance—or even
an entire zone—should not impact the application’s performance.
Question: 83 CertyIQ
Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in
the last month. What should you do?
A. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the amount of
queries per user.
B. In the BigQuery interface, execute a query on the JOBS table to get the required information.
C. Use 'bq show' to list all jobs. Per job, use 'bq ls' to list job information and get the required information.
D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the
required information.
Answer: D
Explanation:
D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the
required information. https://cloud.google.com/bigquery/docs/reference/auditlogs#ids
Question: 84 CertyIQ
You want to automate the creation of a managed instance group. The VMs have many OS package dependencies.
You want to minimize the startup time for new
VMs in the instance group.
What should you do?
A. Use Terraform to create the managed instance group and a startup script to install the OS package
dependencies.
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the
managed instance group with the VM image.
C. Use Puppet to create the managed instance group and install the OS package dependencies.
D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package
dependencies.
Answer: B
Explanation:
Question: 85 CertyIQ
Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its
own dataset. Each dataset has multiple tables.
You want analysts from each country to be able to see and query only the data for their respective countries.
How should you configure the access rights?
A. Create a group per country. Add analysts to their respective country-groups. Create a single group
'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery
jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
B. Create a group per country. Add analysts to their respective country-groups. Create a single group
'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery
jobUser. Share the appropriate tables with view access with each respective analyst country-group.
C. Create a group per country. Add analysts to their respective country-groups. Create a single group
'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery
dataViewer. Share the appropriate dataset with view access with each respective analyst country- group.
D. Create a group per country. Add analysts to their respective country-groups. Create a single group
'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery
dataViewer. Share the appropriate table with view access with each respective analyst country-group.
Answer: A
Explanation:
Answer is A, https://cloud.google.com/bigquery/docs/dataset-access-controls
For a user to be able to query the tables in a dataset, it is not sufficient for the user to have access to the
dataset. A user must also have permission to run a query job in a project. If you want to give a user permission
to run a query from your project, give the user the bigquery.jobs.create permission for the project. You can do
this by assigning the user the roles/bigquery.jobUser role for your project. For more information, see Access
control example
Question: 86 CertyIQ
You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their
current problems is that the on-premises high performance SAN is requiring frequent and expensive upgrades to
keep up with the variety of workloads that are identified as follows: 20 TB of log archives retained for legal
reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer
session state data that allows customers to restart sessions even if off-line for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?
A. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails,
and VM boot/data volumes.
B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud
Storage for log archives, thumbnails, and VM boot/data volumes.
C. Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for
VM boot/data volumes. Cloud Storage for log archives and thumbnails.
D. Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-
backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
Answer: B
Explanation:
There are two points to consider. First, assuming you only consider the migration itself and not the final configuration, the answer is B: VM images are migrated to Cloud Storage first and can then be imported to Compute Engine and saved on persistent disks (https://cloud.google.com/compute/docs/import/import-existing-image). Second, D could only be correct if "local SSD-backed instances for VM boot/data volumes" actually meant persistent disks; local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd) are ephemeral and are not recommended for boot volumes. B is therefore the better answer.
Question: 87 CertyIQ
Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a
consistent set of hostnames even after pod scaling and relaunches.
Which feature of Kubernetes should you use to accomplish this?
A. StatefulSets
B. Role-based access control
C. Container environment variables
D. Persistent Volumes
Answer: A
Explanation:
StatefulSets is a Kubernetes feature, which is what the question asks about. Persistent Volumes are used by StatefulSets (https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), but the feature that provides a consistent set of hostnames across pod scaling and relaunches is the StatefulSet itself; see the GKE documentation (https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset). Answer: A.
Question: 88 CertyIQ
You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group.
You want to improve the cache hit ratio.
What should you do?
A. Customize the cache keys to omit the protocol from the key.
B. Shorten the expiration time of the cached objects.
C. Make sure the HTTP(S) header Cache-Region points to the closest region of your users.
D. Replicate the static content in a Cloud Storage bucket. Point CloudCDN toward a load balancer on that
bucket.
Answer: A
Explanation:
"A logo needs to be cached whether displayed through HTTP or HTTPS. When you customize the cache keys
for the backend service that holds the logo, clear the Protocol checkbox so that requests through HTTP and
HTTPS count as matches for the logo's cache entry."
Reference:
https://cloud.google.com/cdn/docs/best-practices#using_custom_cache_keys_to_improve_cache_hit_ratio
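As a hedged illustration (the backend service name is a placeholder), the protocol can be removed from the CDN cache key with:
  gcloud compute backend-services update web-backend --global \
      --no-cache-key-include-protocol
With this setting, a request for the same object over HTTP and HTTPS hits the same cache entry, which raises the cache hit ratio.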
Question: 89 CertyIQ
Your architecture calls for the centralized collection of all admin activity and VM system logs within your project.
How should you collect these logs from both VMs and services?
Answer: B
Explanation:
B is correct answer The Logging agent streams logs from your VM instances and from selected third-party
software packages to Cloud Logging. It is a best practice to run the Logging agent on all your VM instances.
Reference:
https://cloud.google.com/logging/docs/agent/installation
Question: 90 CertyIQ
You have an App Engine application that needs to be updated. You want to test the update with production traffic
before replacing the current application version.
What should you do?
A. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary
testing.
B. Deploy the update as a new version in the App Engine application, and split traffic between the new and
current versions.
C. Deploy the update in a new VPC, and use Google's global HTTP load balancing to split traffic between the
update and current applications.
D. Deploy the update as a new App Engine application, and use Google's global HTTP load balancing to split
traffic between the new and current applications.
Answer: B
Explanation:
B – Deploy the update as a new version in AppEngine app, and split traffic between the new and current
versions.
https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
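A minimal sketch (the version IDs and split percentages are placeholders): deploy the update without promoting it, then split traffic between the versions:
  gcloud app deploy --version=v2 --no-promote
  gcloud app services set-traffic default --splits=v1=0.9,v2=0.1 --split-by=ip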
Question: 91 CertyIQ
All Compute Engine instances in your VPC should be able to connect to an Active Directory server on specific
ports. Any other traffic emerging from your instances is not allowed. You want to enforce this using VPC firewall
rules.
How should you configure the firewall rules?
A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with
priority 100 to allow the Active Directory traffic for all instances.
B. Create an egress rule with priority 100 to deny all traffic for all instances. Create another egress rule with
priority 1000 to allow the Active Directory traffic for all instances.
C. Create an egress rule with priority 1000 to allow the Active Directory traffic. Rely on the implied deny egress
rule with priority 100 to block all traffic for all instances.
D. Create an egress rule with priority 100 to allow the Active Directory traffic. Rely on the implied deny egress
rule with priority 1000 to block all traffic for all instances.
Answer: A
Explanation:
A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with
priority 100 to allow the Active Directory traffic for all instances.
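As an illustration only (the VPC name, the Active Directory server IP, and the chosen ports are assumptions, not from the question), option A maps to two egress rules:
  gcloud compute firewall-rules create deny-all-egress \
      --network=app-vpc --direction=EGRESS --action=DENY \
      --rules=all --destination-ranges=0.0.0.0/0 --priority=1000
  gcloud compute firewall-rules create allow-ad-egress \
      --network=app-vpc --direction=EGRESS --action=ALLOW \
      --rules=tcp:88,tcp:389,tcp:636,udp:88,udp:389 \
      --destination-ranges=10.10.0.5/32 --priority=100
Because 100 is a higher priority than 1000, the allow rule is evaluated first and only the Active Directory traffic gets out.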
Question: 92 CertyIQ
Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The
company has begun experimenting with a machine learning model on Google Cloud Platform to improve the
quality of results.
What should the customer do to improve their model's results over time?
A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to
analyze the efficiency of the model.
B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer
better results.
C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model
to them as soon as they are available for additional performance.
D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training
data.
Answer: D
Explanation:
Model performance is generally based on the volume of its training data input. The more the data, the better
the model.
Question: 93 CertyIQ
A development team at your company has created a dockerized HTTPS web application. You need to deploy the
application on Google Kubernetes Engine (GKE) and make sure that the application scales automatically.
How should you deploy to GKE?
A. Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance
the HTTPS traffic.
B. Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service
resource of type LoadBalancer to load-balance the HTTPS traffic.
C. Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the
HTTPS traffic.
D. Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to
load-balance the HTTPS traffic.
Answer: A
Explanation:
Ingress is preferred to a LoadBalancer Service here. Ingress is a Kubernetes resource that encapsulates a collection of rules and configurations for routing external HTTP(S) traffic to internal services. On GKE, Ingress is implemented using Cloud Load Balancing: when you create an Ingress in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application.
Question: 94 CertyIQ
You need to design a solution for global load balancing based on the URL path being requested. You need to
ensure operations reliability and end-to-end in- transit encryption based on Google best practices.
What should you do?
Answer: B
Explanation:
Reference:
https://cloud.google.com/load-balancing/docs/https/url-map
Question: 95 CertyIQ
You have an application that makes HTTP requests to Cloud Storage. Occasionally the requests fail with HTTP
status codes of 5xx and 429.
How should you handle these types of errors?
Answer: B
Explanation:
Answer is B
You should use exponential backoff to retry your requests when receiving errors with 5xx or 429 response
codes from Cloud Storage.
Reference:
https://cloud.google.com/storage/docs/json_api/v1/status-codes
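A simplified sketch of retrying with exponential backoff from a shell script (the bucket and object URL are placeholders; production code should also add random jitter and a retry cap):
  for attempt in 1 2 3 4 5; do
    curl -sf -o ./report.csv https://storage.googleapis.com/my-bucket/report.csv && break
    sleep $((2 ** attempt))   # wait 2, 4, 8, 16, 32 seconds between retries
  done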
Question: 96 CertyIQ
You need to develop procedures to test a disaster plan for a mission-critical application. You want to use Google-
recommended practices and native capabilities within GCP.
What should you do?
A. Use Deployment Manager to automate service provisioning. Use Activity Logs to monitor and debug your
tests.
B. Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests.
C. Use gcloud scripts to automate service provisioning. Use Activity Logs to monitor and debug your tests.
D. Use gcloud scripts to automate service provisioning. Use Stackdriver to monitor and debug your tests.
Answer: B
Explanation:
B: Deployment Manager automates service provisioning, and Stackdriver (Cloud Monitoring and Logging) is the native GCP tooling for monitoring and debugging the tests.
Question: 97 CertyIQ
Your company creates rendering software which users can download from the company website. Your company
has customers all over the world. You want to minimize latency for all your customers. You want to follow Google-
recommended practices.
How should you store the files?
Answer: D
Explanation:
Question: 98 CertyIQ
Your company acquired a healthcare startup and must retain its customers' medical information for up to 4 more
years, depending on when it was created. Your corporate policy is to securely retain this data, and then delete it as
soon as regulations allow.
Which approach should you take?
A. Store the data in Google Drive and manually delete records as they expire.
B. Anonymize the data using the Cloud Data Loss Prevention API and store it indefinitely.
C. Store the data in Cloud Storage and use lifecycle management to delete files when they expire.
D. Store the data in Cloud Storage and run a nightly batch script that deletes all expired data.
Answer: C
Explanation:
Answer is C. Note that when you move PHI from an on-premises source into a Cloud Storage bucket, the recorded object creation date is the upload date, not the original creation date from the on-premises source. To carry the original creation date over, you can script a function that writes it to the object's Custom-Time metadata field, which object lifecycle rules can reference.
So to delete objects after up to 4 years, you add an object lifecycle rule with the condition "Days since custom time" set to 1460 days.
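A hedged sketch of such a rule (the bucket name is a placeholder; 4 years is taken as 1460 days), with a lifecycle.json containing:
  {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"daysSinceCustomTime": 1460}
      }
    ]
  }
and applied with:
  gsutil lifecycle set lifecycle.json gs://healthcare-records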
Question: 99 CertyIQ
You are deploying a PHP App Engine Standard service with Cloud SQL as the backend. You want to minimize the
number of queries to the database.
What should you do?
A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database
values from memcache before issuing a query to Cloud SQL.
B. Set the memcache service level to dedicated. Create a cron task that runs every minute to populate the
cache with keys containing query results.
C. Set the memcache service level to shared. Create a cron task that runs every minute to save all expected
queries to a key called cached_queries.
D. Set the memcache service level to shared. Create a key called cached_queries, and return database values
from the key before using a query to Cloud SQL.
Answer: A
Explanation:
Both the dedicated and shared service levels can cache query results; the deciding factor is how the cache is keyed. Storing all query results under a single key such as cached_queries does not work because memcache has per-value size limits, so caching each result under a key derived from a hash of the query (option A) is the right approach, and the dedicated service level gives predictable capacity: https://cloud.google.com/appengine/docs/standard/python/memcache
A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility
service running on Compute Engine instances.
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to
that topic using a message-processing utility service running on Compute Engine instances.
C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a
message-processing utility service running on Compute Engine instances.
D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic
using a message-processing utility service running on Compute Engine instances.
Answer: B
Explanation:
B. Cloud Scheduler provides a fully managed, enterprise-grade service that lets you schedule events. After
you have scheduled a job, Cloud Scheduler will call the configured event handlers, which can be App Engine
services, HTTP endpoints, or Pub/Sub subscriptions.
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to
Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection
and use it to upload files daily.
C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to
Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and
upload files daily using the gsutil -m option.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to
Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and
upload files daily.
Answer: B
Explanation:
Agree with B. Transferring 10 TB over a 100 Mbps connection would take far too long, so ship the archive on a Transfer Appliance and use a Dedicated Interconnect or Direct Peering connection for the daily uploads.
https://cloud.google.com/solutions/transferring-big-data-sets-to-gcp#close
Answer: B
Explanation:
B is the answer. https://cloud.google.com/blog/products/data-analytics/handling-duplicate-data-in-streaming-pipeline-using-pubsub-dataflow
A. 1. Define a migration plan based on the list of the applications and their dependencies. 2. Migrate all virtual
machines into Compute Engine individually with Migrate for Compute Engine.
B. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Create images
of all disks. Import disks on Compute Engine. 3. Create standard virtual machines where the boot disks are the
ones you have imported.
C. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Define a
migration plan, prepare a Migrate for Compute Engine migration RunBook, and execute the migration.
D. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Install a third-
party agent on all selected virtual machines. 3. Migrate all virtual machines into Compute Engine.
Answer: C
Explanation:
The Migrate for Compute Engine migration framework has four phases:
- Assess. In this phase, you assess your source environment, assess the workloads that you want to migrate to Google Cloud, and assess which VMs support each workload.
- Plan. In this phase, you create the basic infrastructure for Migrate for Compute Engine, such as provisioning the resource hierarchy and setting up network access.
- Deploy. In this phase, you migrate the VMs from the source environment to Compute Engine.
- Optimize. In this phase, you begin to take advantage of the cloud technologies and capabilities.
Reference:
https://cloud.google.com/architecture/migrating-vms-migrate-for-compute-engine-getting-started
A. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use an HTTP load
balancer in front of the instances.
B. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use a network load
balancer in front of the instances.
C. Use an unmanaged instance group with an active and standby instance in different zones, use a regional
persistent disk, and use an HTTP load balancer in front of the instances.
D. Use an unmanaged instance group with an active and standby instance in different zones, use a regional
persistent disk, and use a network load balancer in front of the instances.
Answer: D
Explanation:
Since the traffic is TCP, options A and C are eliminated because an HTTP(S) load balancer does not handle it. B is eliminated because Cloud Filestore is a managed file service and does not give full control over the storage system. D, an unmanaged instance group with an active and standby instance, a network load balancer, and a regional persistent disk, provides the full control required for the migration.
Reference:
https://cloud.google.com/compute/docs/instance-groups
A. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
B. Configure a direct peering connection between the on-premises environment and Google Cloud.
C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google
Cloud.
Answer: D
Explanation:
D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud. Dedicated Interconnect provides a very high-speed, low-latency, enterprise-grade connection, with circuits available at 10 Gbps or 100 Gbps. Cloud VPN, by contrast, is recommended for lower-throughput needs or for experimenting with migrating workloads to Google Cloud.
A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.
B. Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of
both services.
C. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the
development branch. As part of the Cloud Build trigger, configure the substitution variable
TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version.
D. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of
the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new version
of the application.
Answer: A
Explanation:
A. Deploy a new revision to Cloud Run with the new version and configure traffic percentages between revisions. Cloud Run can route requests between multiple revisions of a service using the traffic percentages you configure. This enables canary deployments: send a small percentage of traffic to the new revision, validate that it performs correctly, and then gradually increase the traffic. The same traffic-management controls also make it possible to roll back quickly to an older revision, and they are available in the Cloud Console and in the gcloud command-line tool.
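A minimal sketch (the service name, image, region, and revision name are placeholders): deploy the new revision without traffic, then shift a small percentage to it:
  gcloud run deploy my-service --image=gcr.io/my-project/app:v2 --no-traffic --region=us-central1
  gcloud run services update-traffic my-service --region=us-central1 \
      --to-revisions=my-service-00002-xyz=10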
A. Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create
alert policies.
B. Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install
alerting software on a Compute Engine instance.
C. Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export
the data to BigQuery, and make a Data Studio dashboard.
D. Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and
create alert policies.
Answer: A
Explanation:
Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create
alert policies.
A. Sharding
B. Read replicas
C. Binary logging
D. Automated backups
E. Semisynchronous replication
Answer: CD
Explanation:
Backups help you restore lost data to your Cloud SQL instance. Additionally, if an instance is having a
problem, you can restore it to a previous state by using the backup to overwrite it. Enable automated backups
for any instance that contains necessary data. Backups protect your data from loss or damage.
Enabling automated backups, along with binary logging, is also required for some operations, such as clone
and replica creation.
Reference:
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
A. Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this
identifier.
B. When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify
any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request,
query Data Catalog to find the column with personal information.
C. Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that
affect the subject's data from this view. Use this view instead of the source table for all analysis tasks.
D. Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique
identifier with a salted SHA256 of its value.
Answer: A
Explanation:
Answer is A. The other options either add unnecessary complexity (B) or avoid actually deleting the data: C merely excludes rows from a view, and D only pseudonymizes the identifier. Keep it simple (KISS): with a unique identifier per individual, a deletion request is satisfied by deleting the matching rows.
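As an illustration only (the project, dataset, table, and column names are placeholders), a deletion request then becomes a single DML statement:
  bq query --use_legacy_sql=false \
      'DELETE FROM `my-project.analytics.events` WHERE user_id = "u-12345"'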
A. App Engine
B. GKE On-Prem
C. Compute Engine
D. Google Kubernetes Engine
Answer: A
Explanation:
A. App Engine. The deciding requirement is to minimize the operational overhead of the solution, and App Engine is the most managed of the listed options.
A. Create a shell script that uses the gcloud command to change the machine type of the development and
acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the
production instances to automate the task.
B. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance
environments after office hours and start them just before office hours.
C. Deploy the development and acceptance applications on a managed instance group and enable autoscaling.
D. Use regular Compute Engine instances for the production environment, and use preemptible VMs for the
acceptance and development environments.
Answer: B
Explanation:
B is the answer.
https://cloud.google.com/blog/products/it-ops/best-practices-for-optimizing-your-cloud-costs
Schedule VMs to auto start and stop: The benefit of a platform like Compute Engine is that you only pay for
the compute resources that you use. Production systems tend to run 24/7; however, VMs in development, test
or personal environments tend to only be used during business hours, and turning them off can save you a lot
of money!
https://cloud.google.com/blog/products/storage-data-transfer/save-money-by-stopping-and-starting-
compute-engine-instances-on-schedule
Cloud Scheduler, GCP’s fully managed cron job scheduler, provides a straightforward solution for
automatically stopping and starting VMs. By employing Cloud Scheduler with Cloud Pub/Sub to trigger Cloud
Functions on schedule, you can stop and start groups of VMs identified with labels of your choice (created in
Compute Engine). Here you can see an example schedule that stops all VMs labeled "dev" at 5pm and restarts
them at 9am, while leaving VMs labeled "prod" untouched
Reference:
https://cloud.google.com/blog/products/it-ops/best-practices-for-optimizing-your-cloud-costs
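A hedged sketch of the scheduling half of option B (the topic names, schedules, time zone, and label payload are placeholders, and the Cloud Functions that actually stop and start the labeled VMs are assumed to exist already):
  gcloud scheduler jobs create pubsub stop-dev-acc-vms \
      --schedule="0 19 * * 1-5" --time-zone="Europe/Amsterdam" \
      --topic=stop-instances --message-body='{"label":"env=dev-acc"}'
  gcloud scheduler jobs create pubsub start-dev-acc-vms \
      --schedule="0 8 * * 1-5" --time-zone="Europe/Amsterdam" \
      --topic=start-instances --message-body='{"label":"env=dev-acc"}'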
A. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and
the on-premises MySQL server. 2. Stop the on-premises application. 3. Create a mysqldump of the on-premises
MySQL server. 4. Upload the dump to a Cloud Storage bucket. 5. Import the dump into Cloud SQL. 6. Modify the
source code of the application to write queries to both databases and read from its local database. 7. Start the
Compute Engine application. 8. Stop the on-premises application.
B. 1. Set up Cloud SQL proxy and MySQL proxy. 2. Create a mysqldump of the on-premises MySQL server. 3.
Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Stop the on-premises
application. 6. Start the Compute Engine application.
C. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and
the on-premises MySQL server. 2. Stop the on-premises application. 3. Start the Compute Engine application,
configured to read and write to the on-premises MySQL server. 4. Create the replication configuration in Cloud
SQL. 5. Configure the source database server to accept connections from the Cloud SQL replica. 6. Finalize the
Cloud SQL replica configuration. 7. When replication has been completed, stop the Compute Engine
application. 8. Promote the Cloud SQL replica to a standalone instance. 9. Restart the Compute Engine
application, configured to read and write to the Cloud SQL standalone instance.
D. 1. Stop the on-premises application. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the
dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Start the application on Compute
Engine.
Answer: C
Explanation:
C, because it requires minimal modification to the application or database. It is also easier to fail back to the original solution if the cloud implementation has issues (assuming there will be a post-go-live monitoring period).
A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default
route to an internet gateway.
B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route
to the internet gateway on this new subnet.
C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved
instances in the allowedValues list.
Answer: D
Explanation:
Reference:
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
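As an illustration (the project, zone, and instance names are placeholders), the constraint can be set from a policy file:
  # policy.yaml
  constraint: constraints/compute.vmExternalIpAccess
  listPolicy:
    allowedValues:
      - projects/my-project/zones/us-central1-a/instances/approved-vm-1

  gcloud resource-manager org-policies set-policy policy.yaml --project=my-project
Only the instances listed under allowedValues can then be assigned external IP addresses.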
Answer: B
Explanation:
When you create a firewall rule, there is an option to turn firewall rule logging on or off; it is off by default. To get Firewall Insights or view the logs for a specific firewall rule, you need to enable logging when creating the rule, or enable it later by editing the rule.
Reference:
https://cloud.google.com/network-intelligence-center/docs/firewall-insights/how-to/using-firewall-insights
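As a minimal illustration (the rule name is a placeholder), logging can be enabled on an existing rule with:
  gcloud compute firewall-rules update allow-ssh --enable-logging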
A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets. 2. Create an access
level with the CIDR of the office network.
B. 1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range. 2. Use
the Classless Inter-domain Routing (CIDR) of the office network.
C. 1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add
IAM permissions to the buckets. 2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at
the start of business and remove permissions at the end of business.
D. 1. Create a Cloud VPN to the office network. 2. Configure Private Google Access for on-premises hosts.
Answer: A
Explanation:
A is the best option. B: not all instances need this restriction. C: this does not restrict remote access; users can still retrieve data with their credentials during the business day, and the requirement is to restrict retrieval from outside the office network (what if they are working from home?). D: a VPN adds too much overhead.
Answer: D
Explanation:
D - By definition the MIG applies an opportunistic update only when you manually initiate the update on
selected instances or when new instances are created.
A. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs,
use the latest snapshot to restore the disk in the same zone.
B. Configure the Compute Engine instances with an instance template for the application, and use a regional
persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up
the application in another zone in the same region. Use the regional persistent disk for the application data.
C. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs,
use the latest snapshot to restore the disk in another zone within the same region.
D. Configure the Compute Engine instances with an instance template for the application, and use a regional
persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up
the application in another region. Use the regional persistent disk for the application data.
Answer: B
Explanation:
B. Configure the Compute Engine instances with an instance template for the application, and use a regional
persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up
the application in another zone in the same region. Use the regional persistent disk for the application data.
Answer is B
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new
IP addresses so there is no overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to
perform NAT on the overlapping IP space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a
custom route advertisement to block the overlapping IP space.
D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks
the overlapping IP space.
Answer: A
Explanation:
A is correct: overlapping IP ranges cannot be routed across a VPN tunnel, so the new VPC must use new, non-overlapping IP addresses before it is connected to the data center.
Answer: B
Explanation:
https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms#preemptible_and_non-
preemptible_secondary_workers
Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs.
How should you accomplish this?
Answer: B
Explanation:
Add two additional NICs to Instance #1 with the following configuration: NIC1 in VPC #2, subnetwork subnet #2; NIC2 in VPC #3, subnetwork subnet #3. Then update firewall rules to enable traffic between the instances.
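As a sketch, the extra interfaces would be attached at instance creation time (the VPC, subnet, and zone names are placeholders; each NIC must be in a different VPC network):
gcloud compute instances create instance-1 \
  --zone=us-central1-a \
  --network-interface network=vpc-1,subnet=subnet-1 \
  --network-interface network=vpc-2,subnet=subnet-2 \
  --network-interface network=vpc-3,subnet=subnet-3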
A. Create a Compute Engine instance template using the most recent Debian image. Create an instance from
this template, and install and configure the application as part of the startup script. Repeat this process
whenever a new Google-managed Debian image becomes available.
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch
management to install available updates.
C. Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and
configure the application on the instance. Repeat this process whenever a new Google-managed Debian image
becomes available.
D. Create a Docker container with Debian as the base image. Install and configure the application as part of the
Docker image creation process. Host the container on Google Kubernetes Engine and restart the container
whenever a new update is available.
Answer: B
Explanation:
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch
management to install available updates.
Reference:
https://cloud.google.com/compute/docs/os-patch-management
A. 1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to
investigate logs from affected Pods.
B. 1. Create a new GKE cluster with Cloud Operations for GKE enabled. 2. Migrate the affected Pods to the new
cluster, and redirect traffic for those Pods to the new cluster. 3. Use the GKE Monitoring dashboard to
investigate logs from affected Pods.
C. 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. 2. Set an alert to
trigger whenever the application returns an error.
D. 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. 2. Migrate the
affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Set an alert to trigger
whenever the application returns an error.
Answer: A
Explanation:
A: This is a simple question; there is no need for Prometheus and no need to build another cluster.
Use Prometheus only for a multi-cloud solution, hence A. Google Cloud Managed Service for Prometheus is Google Cloud's fully managed multi-cloud solution for Prometheus metrics. It lets you globally monitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale.
Answer: C
Explanation:
FUSE can be used, but it comes with latency. The question describes a heavy workload, around 100 MB/s of writes, so FUSE is not a good choice; Filestore is the much better solution.
A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices.
B. Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google
Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the
configurations of the filtered workloads.
C. Use Anthos Config Management to create a namespaceSelector selecting the relevant cluster namespace.
On the Google Cloud Console page for Google Kubernetes Engine, visit the workloads and filter on the
namespace. Inspect the configurations of the filtered workloads.
D. Reinstall istio using the default istio profile in order to collect request latency. Evaluate the telemetry
between the microservices in the Cloud Console.
Answer: A
Explanation:
Anthos Service Mesh’s robust tracing, monitoring, and logging features give you deep insights into how your
services are performing, how that performance affects other processes, and any issues that might exist.
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
B. Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer.
Use the service account to upload new files.
C. Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.
D. Create the bucket with fine-grained access control, and grant a service account the role of Object Writer.
Use the service account to upload new files.
Answer: A
Explanation:
If a bucket has a retention policy, objects in the bucket can only be deleted or replaced once their age is greater than the retention period.
Once you lock a retention policy, you cannot remove it or reduce the retention period it has.
Reference:
https://cloud.google.com/storage/docs/using-bucket-lock
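For example, the retention policy and lock could be applied with gsutil (the bucket name is a placeholder; note that the lock is irreversible):
gsutil retention set 5y gs://example-records-bucket
gsutil retention lock gs://example-records-bucket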
A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the
container when committing on the development branch. After a successful commit, have the developer deploy
the newly built container image on the development cluster.
B. Install a post-commit hook on the remote git repository that tests the code and builds the container when
code is pushed to the development branch. After a successful commit, have the developer deploy the newly
built container image on the development cluster.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and
stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new
image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in
Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the
Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has
access to deploy new versions.
Answer: C
Explanation:
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container,
and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys
the new image on the development cluster. Ensure only the deployment tool has access to deploy new
versions
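A minimal cloudbuild.yaml sketch of such a trigger, assuming a hypothetical Node.js service and image name my-app (the test step and names are assumptions, not from the question):
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['test']                      # run unit tests
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
images:
- 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'   # pushed to Container Registry on success
A separate deployment tool (for example, a CD pipeline watching the registry) then rolls the new tag out to the development cluster.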
Explanation:
It all depends on how you want to troubleshoot the issue: do you want to check the application before or after increasing the maximum number of instances in the scaling group? In practice, people will ask for an increase in the maximum number of instances, and if the application process continues to consume all the CPU, they will probably stop or restart the app. D is the only sensible option. A is not an option, B restarts instances without knowing whether that fixes the issue, and C assumes SSH access to Unix VMs.
Answer: B
Explanation:
Even though there is a maximum concurrency of 1,000 requests per container, there is no upper limit on the number of containers (unless we specify one). So Cloud Run will spawn as many containers as needed to handle the incoming requests.
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use
the Service DNS name to address it from other microservices within the cluster.
B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use
the Ingress IP address to address the Deployment from other microservices within the cluster.
C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS
name to address the microservice from other microservices within the cluster.
D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP
address name to address the Pod from other microservices within the cluster.
Answer: A
Explanation:
1. Based on the description "You want to be able to configure each microservice with a specific number of replicas", you need a Deployment (or a StatefulSet for stateful services) rather than a bare Pod, so options C and D are out.
2. Based on the description "You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to", you need a stable, uniform address inside the cluster. A Service provides a cluster DNS name that stays the same no matter how many replicas back it, so A is correct; an Ingress is intended for external traffic, not for addressing between microservices within the cluster.
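A minimal sketch of a Deployment plus Service (names and image are placeholders). Other microservices simply call http://orders (or orders.<namespace>.svc.cluster.local), regardless of how many replicas exist:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: gcr.io/example-project/orders:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080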
A. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2.
Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3.
Use Cloud VPN to join the two VPCs.
B. 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the
networking team, and assign the Compute Admin role to the development team.
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a
second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role
to the development team.
D. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2.
Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3.
Use VPC Peering to join the two VPCs.
Answer: B
Explanation:
With a single project and a single VPC, assign the Network Admin role to the networking team and the Compute Admin role to the development team. There is no need for another project.
A. Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user
data in Cloud SQL.
B. Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google
Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner.
C. Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the
user data in Cloud SQL.
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the
APIs and save the user data in Firestore.
Answer: D
Explanation:
D, for the simple reason that there is low traffic with occasional spikes. Also, Cloud CDN caches static content rather than storing it, and you can put a load balancer in front of a Cloud Storage bucket instead. There is no need for a multi-zonal cluster or for expensive Spanner, so the only remaining option is D.
A. Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant
events.
B. Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events.
C. Export logs to a Pub/Sub topic, and trigger Cloud Function with the relevant log events.
D. Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.
Answer: C
Explanation:
https://cloud.google.com/blog/products/management-tools/automate-your-response-to-a-cloud-logging-
event
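As a sketch, the sink and trigger could be wired up like this (the sink name, topic, filter, and function name are placeholders):
gcloud logging sinks create audit-event-sink \
  pubsub.googleapis.com/projects/example-project/topics/log-events \
  --log-filter='resource.type="gce_instance" AND severity>=ERROR'
gcloud functions deploy handle-log-event \
  --runtime=python310 --trigger-topic=log-events --entry-point=handle_event
Remember to grant the sink's writer identity the Pub/Sub Publisher role on the topic.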
A. Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the Cloud
NAT IP address to reach the instance.
B. Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the instance
group as a backend. Connect to the instance using the TCP Proxy IP.
C. Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured
Tunnel User. Use the gcloud command line tool to ssh into the instance.
D. Create a bastion host in the network to SSH into the bastion host from your office location. From the bastion
host, SSH into the desired instance.
Answer: C
Explanation:
C --> https://cloud.google.com/solutions/connecting-securely#cloud_iap
It says "No instances can have public IP" not just the the instance we are trying to SSH. So D cannot be the
answer as Bastion Host, has a public IP.C is the only available option as we need to connect specifically to an
instance.
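For example (instance and zone are placeholders):
gcloud compute ssh my-private-instance --zone=us-central1-a --tunnel-through-iap
This opens an SSH session through the IAP TCP tunnel even though the instance has no external IP.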
A. Assign the development team group the Project Viewer role on the Finance folder, and assign the
development team group the Project Owner role on the Shopping folder.
B. Assign the development team group only the Project Viewer role on the Finance folder.
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the
development team group Project Owner role from the Organization.
D. Assign the development team group only the Project Owner role on the Shopping folder.
Answer: C
Explanation:
Reference:
https://cloud.google.com/resource-manager/docs/creating-managing-folders
A. Add a taint to one of the nodes of the Kubernetes cluster. For the specific microservice, configure a pod anti-
affinity label that has the name of the tainted node as a value.
B. Use Istio's fault injection on the particular microservice whose faulty behavior you want to simulate.
C. Destroy one of the nodes of the Kubernetes cluster to observe the behavior.
D. Configure Istio's traffic management features to steer the traffic away from a crashing microservice.
Answer: B
Explanation:
B is the answer
https://istio.io/latest/docs/tasks/traffic-management/fault-injection/
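A minimal fault-injection sketch for a hypothetical service named ratings, aborting all requests with HTTP 500 to simulate a misbehaving dependency:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:
        percentage:
          value: 100
        httpStatus: 500
    route:
    - destination:
        host: ratings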
A. App Engine
B. Cloud Endpoints
C. Compute Engine
D. Google Kubernetes Engine
Answer: A
Explanation:
A, App Engine: you want your people dedicated to the application, not to managing infrastructure.
Reference:
https://cloud.google.com/terms/services
A. Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at
least one month before replacing the old API with the new API.
B. Create an automated process to generate API documentation, and update the public API documentation as
part of the CI/CD process when deploying an update to the API.
C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible
change.
D. Use a versioning strategy for the APIs that adds the suffix DEPRECATED to the current API version number
on every backward-incompatible change. Use the current version number for the new API.
Answer: C
Explanation:
All Google API interfaces must provide a major version number, which is encoded at the end of the protobuf
package, and included as the first part of the URI path for REST APIs. If an API introduces a breaking change,
such as removing or renaming a field, it must increment its API version number to ensure that existing user
code does not suddenly break.
A. The new approach will be significantly less costly, make it easier to manage the underlying infrastructure,
and automatically manage the CI/CD pipelines.
B. The monolithic solution can be converted to a container with Docker. The generated container can then be
deployed into a Kubernetes cluster.
C. The new approach will make it easier to decouple infrastructure from application, develop and release new
features, manage the underlying infrastructure, manage CI/CD pipelines and perform A/B testing, and scale the
solution if necessary.
D. The process can be automated with Migrate for Compute Engine.
Answer: C
Explanation:
C: the new approach decouples infrastructure from the application, and the development team wants a fully managed service approach.
A. Use a load testing tool to simulate the expected number of concurrent users and total requests to your
application, and inspect the results.
B. Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application
deployments. Send curl requests to your application, and validate if the auto scaling works.
C. Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global
HTTP(S) load balancer to expose the different clusters over a single global IP address.
D. Use Cloud Debugger in the development environment to understand the latency between the different
microservices.
Answer: A
Explanation:
1. A is correct: without generating load, there is nothing to measure latency against.
2. The question focuses on the current infrastructure and on verifying whether the current setup meets the latency target, and only a load test can do that. Autoscaling is not the need of the hour; it may be required later, depending on the test results. It would be overkill to add everything in advance if the app is fine with the current configuration. Hence A.
A. Use kubectl autoscale deployment APP_NAME --max 6 --min 2 --cpu-percent 50 to configure Kubernetes
autoscaling deployment.
B. Configure a Kubernetes autoscaling deployment based on the subscription/push_request_latencies metric.
C. Use the --enable-autoscaling flag when you create the Kubernetes cluster.
D. Configure a Kubernetes autoscaling deployment based on the subscription/num_undelivered_messages
metric.
Answer: D
Explanation:
D is correct: the subscription/num_undelivered_messages metric reflects the actual backlog of unprocessed Pub/Sub messages, so scaling the workers on it matches capacity to the real amount of outstanding work; push_request_latencies (B) measures push delivery latency, not backlog.
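A sketch of such a HorizontalPodAutoscaler, assuming the Custom Metrics Stackdriver Adapter is installed in the cluster and a hypothetical subscription my-subscription and Deployment worker:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: my-subscription
      target:
        type: AverageValue
        averageValue: "100"   # target backlog per replica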
A. Make sure a developer is tagging the code commit with the date and time of commit.
B. Make sure a developer is adding a comment to the commit that links to the deployment.
C. Make the container tag match the source code commit hash.
D. Make sure the developer is tagging the commits with latest.
Answer: C
Explanation:
C is correct "By design, the Git commit hash is immutable and references a specific version of your software."
as per https://cloud.google.com/architecture/best-practices-for-building-
containers#tagging_using_the_git_commit_hash
C is the answer. See https://cloud.google.com/architecture/best-practices-for-building-containers#tagging_using_the_git_commit_hash: you can use this commit hash as a version number for your software, but also as a tag for the Docker image built from this specific version of your software. Doing so makes Docker images traceable: because in this case the image tag is immutable, you instantly know which specific version of your software is running inside a given container.
A. Develop the application with containers, and deploy to Google Kubernetes Engine.
B. Develop the application for App Engine standard environment.
C. Use a Managed Instance Group when deploying to Compute Engine.
D. Develop the application for App Engine flexible environment, using a custom runtime.
Answer: B
Explanation:
App Engine standard now supports the Go language. It is a fully managed service, so there is no operational overhead, and you only pay for what you use.
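For reference, a minimal app.yaml for a Go service on App Engine standard (the runtime version and scaling limit are placeholders):
runtime: go121
automatic_scaling:
  max_instances: 10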
Question: 143 CertyIQ
Your company is designing its data lake on Google Cloud and wants to develop different ingestion pipelines to
collect unstructured data from different sources.
After the data is stored in Google Cloud, it will be processed in several data pipelines to build a recommendation
engine for end users on the website. The structure of the data retrieved from the source systems can change at
any time. The data must be stored exactly as it was retrieved for reprocessing purposes in case the data structure
is incompatible with the current processing pipelines. You need to design an architecture to support the use case
after you retrieve the data. What should you do?
A. Send the data through the processing pipeline, and then store the processed data in a BigQuery table for
reprocessing.
B. Store the data in a BigQuery table. Design the processing pipelines to retrieve the data from the table.
C. Send the data through the processing pipeline, and then store the processed data in a Cloud Storage bucket
for reprocessing.
D. Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the
bucket.
Answer: D
Explanation:
The data needs to be stored as it is retrieved. This would mean that any processing should be done after it is
stored.
A. Grant all department members the required IAM permissions for their respective projects.
B. Create a Google Group per department and add all department members to their respective groups. Create a
folder per department and grant the respective group the required IAM permissions at the folder level. Add the
projects under the respective folders.
C. Create a folder per department and grant the respective members of the department the required IAM
permissions at the folder level. Structure all projects for each department under the respective folders.
D. Create a Google Group per department and add all department members to their respective groups. Grant
each group the required IAM permissions for their respective projects.
Answer: B
Explanation:
B. Create a Google Group per department and add all department members to their respective groups. Create
a folder per department and grant the respective group the required IAM permissions at the folder level. Add
the projects under the respective folders.
A. Configure a Kubernetes lifecycle hook to prevent the container from starting if it is not approved for usage in
the given environment.
B. Implement a corporate policy to prevent teams from deploying Docker images to an environment unless the
Docker image was tested in an earlier environment.
C. Configure binary authorization policies for the development, staging, and production clusters. Create
attestations as part of the continuous integration pipeline.
D. Create a Kubernetes admissions controller to prevent the container from starting if it is not approved for
usage in the given environment.
Answer: C
Explanation:
C. Configure binary authorization policies for the development, staging, and production clusters. Create
attestations as part of the continuous integration pipeline.
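A rough sketch of what such a Binary Authorization policy could look like; the project and attestor names are placeholders, and this is an assumed typical layout rather than a definitive configuration:
# policy.yaml
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/example-project/attestors/ci-attestor
# import it into the project that hosts the cluster
gcloud container binauthz policy import policy.yaml
The CI pipeline would then create an attestation for each image it has built and tested, signed with the attestor's key.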
A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.
B. Use the Data Transfer appliance to perform an offline migration.
C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into
Cloud Storage.
D. Compress the data and upload it with gsutil -m to enable multi-threaded copy.
Answer: D
Explanation:
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#time
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#options_available_from_google
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#gsutil_for_smaller_transfers_of_on-premises_data
gsutil is especially useful in the following scenarios: your transfers need to be executed on an as-needed basis, or during command-line sessions by your users; and you are transferring only a few files or very large files, or both. This is our use case, because we are moving a single 10 TB dump.
Answer: C
Explanation:
You want to maximize performance while minimizing downtime and data loss
A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using
the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data
and to store it in a new BigQuery dataset.
B. Generate a new key in Cloud KMS. Create a dataset in BigQuery using the customer-managed key option and
select the created key.
C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and
select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select
the created key.
Answer: D
Explanation:
For those saying BigQuery does not support CSEK, read the link below. You need to import your CSEK into Cloud KMS, where it becomes a CMEK; from there you can use it with BigQuery.
https://cloud.google.com/bigquery/docs/customer-managed-encryption
A. Create a key with Cloud Key Management Service (KMS). Encrypt the data using the encrypt method of
Cloud KMS.
B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud
KMS key.
C. Generate a GPG key pair. Encrypt the data using the GPG key. Upload the encrypted data to the bucket.
D. Generate an AES-256 encryption key. Encrypt the data in the bucket using the customer-supplied encryption
keys feature.
Answer: B
Explanation:
As the question states, "your company must be able to rotate the encryption key". This is easy with Cloud KMS:
https://cloud.google.com/kms/docs/rotating-keys#kms-create-key-rotation-schedule-gcloud
For security reasons you create the key in Cloud KMS and then set it as the encryption key on the bucket, since the data lives inside the bucket.
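For example (the key ring, key, bucket, and rotation date are placeholders):
gcloud kms keys create bucket-key \
  --keyring=storage-ring --location=us --purpose=encryption \
  --rotation-period=365d --next-rotation-time=2030-01-01T00:00:00Z
gsutil kms encryption -k projects/example-project/locations/us/keyRings/storage-ring/cryptoKeys/bucket-key gs://example-bucket
The Cloud Storage service agent also needs the Encrypter/Decrypter role on the key.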
A. Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet.
B. Configure the GKE cluster as a private cluster. Configure Private Google Access on the Virtual Private Cloud
(VPC).
C. Configure the GKE cluster as a route-based cluster. Configure Private Google Access on the Virtual Private
Cloud (VPC).
D. Create a Compute Engine instance, and install a NAT Proxy on the instance. Configure all workloads on GKE
to pass through this proxy to access third-party services on the Internet.
Answer: A
Explanation:
A is correct: making the cluster private removes external IPs from the nodes, and Cloud NAT provides the outbound internet access needed to reach the third-party services. Private Google Access (B) only enables access to Google APIs and services, not the public internet.
Answer: D
Explanation:
A. 1. Use Linux shasum to compute a digest of files you want to upload. 2. Use gsutil -m to upload all the files to
Cloud Storage. 3. Use gsutil cp to download the uploaded files. 4. Use Linux shasum to compute a digest of the
downloaded files. 5. Compare the hashes.
B. 1. Use gsutil -m to upload the files to Cloud Storage. 2. Develop a custom Java application that computes
CRC32C hashes. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded
files. 4. Compare the hashes.
C. 1. Use gsutil -m to upload all the files to Cloud Storage. 2. Use gsutil cp to download the uploaded files. 3.
Use Linux diff to compare the content of the files.
D. 1. Use gsutil -m to upload the files to Cloud Storage. 2. Use gsutil hash -c FILE_NAME to generate CRC32C
hashes of all on-premises files. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the
uploaded files. 4. Compare the hashes.
Answer: D
Explanation:
Calculate hashes on local files, which can be compared with the gsutil ls -L output. If a specific hash option is not provided, this command calculates all gsutil-supported hashes for the files. Note that gsutil automatically performs hash validation when uploading or downloading files, so this command is only needed if you want to write a script that separately checks the hash. If you calculate a CRC32C hash for files without a compiled crcmod installation, hashing will be very slow; see gsutil help crcmod for details.
https://cloud.google.com/storage/docs/gsutil/commands/hash
A. Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level
Objective (SLO), and create an alerting policy based on this SLO.
B. Enable the Cloud Trace API on your project, and use Cloud Monitoring Alerts to send an alert based on the
Cloud Trace metrics.
C. Use Cloud Profiler to follow up the request latency. Create a custom metric in Cloud Monitoring based on the
results of Cloud Profiler, and create an Alerting policy in case this metric exceeds the threshold.
D. Configure Anthos Config Management on your cluster, and create a yaml file that defines the SLO and
alerting policy you want to deploy in your cluster.
Answer: A
Explanation:
Reference:
https://cloud.google.com/anthos/docs/tutorials/manage-slos
A. Create a second GKE cluster in asia-southeast1, and expose both APIs using a Service of type LoadBalancer.
Add the public IPs to the Cloud DNS zone.
B. Use a global HTTP(s) load balancer with Cloud CDN enabled.
C. Create a second GKE cluster in asia-southeast1, and use kubemci to create a global HTTP(s) load balancer.
D. Increase the memory and CPU allocated to the application in the cluster.
Answer: C
Explanation:
C is correct, however, this question is an old question and need to be updated to use the ingress for global
HTTPS LB
A. Create an instance template with the smallest available machine type, and use an image of the third-party
application taken from a current on-premises virtual machine. Create a managed instance group that uses
average CPU utilization to autoscale the number of instances in the group. Modify the average CPU utilization
threshold to optimize the number of instances running.
B. Create an App Engine flexible environment, and deploy the third-party application using a Dockerfile and a
custom runtime. Set CPU and memory options similar to your application's current on-premises virtual machine
in the app.yaml file.
C. Create multiple Compute Engine instances with varying CPU and memory options. Install the Cloud
Monitoring agent, and deploy the third-party application on each of them. Run a load test with high traffic
levels on the application, and use the results to determine the optimal settings.
D. Create a Compute Engine instance with CPU and memory options similar to your application's current on-
premises virtual machine. Install the Cloud Monitoring agent, and deploy the third-party application. Run a load
test with normal traffic levels on the application, and follow the Rightsizing Recommendations in the Cloud
Console.
Answer: D
Explanation:
A: The application may not support horizontal scaling and may not run on instances with such a small CPU.
B: Dockerizing a third-party application is not a requirement; it is complex and costly.
C: Too expensive.
D: Simple and works.
Answer: C
Explanation:
Going by the definition, VPC Service Controls improves your ability to mitigate the risk of data exfiltration from Google Cloud services such as Cloud Storage and BigQuery; hence C is correct.
A. Add the node group name as a network tag when creating Compute Engine instances in order to host each
workload on the correct node group.
B. Add the node name as a network tag when creating Compute Engine instances in order to host each
workload on the correct node.
C. Use node affinity labels based on the node group name when creating Compute Engine instances in order to
host each workload on the correct node group.
D. Use node affinity labels based on the node name when creating Compute Engine instances in order to host
each workload on the correct node.
Answer: D
Explanation:
Pay attention to the fine details: the question is about aligning EACH client to their own dedicated node (D), not to a node group (C).
https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes#default_affinity_labels
The reference above clearly describes the default affinity labels for the node group and the node name. Unless each client is expected to grow into their own dedicated node group (not in the current requirement), the answer is D rather than C.
Compute Engine assigns two default affinity labels to each node:
Key: compute.googleapis.com/node-group-name
Key: compute.googleapis.com/node-name
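As a sketch, the affinity can be supplied at instance creation time with a node affinity file (the node and instance names are placeholders):
# affinity.json
[
  {
    "key": "compute.googleapis.com/node-name",
    "operator": "IN",
    "values": ["client-a-node-1"]
  }
]
gcloud compute instances create client-a-vm --zone=us-central1-a --node-affinity-file=affinity.json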
A. Google Compute Engine unmanaged instance groups and Network Load Balancer
B. Google Compute Engine managed instance groups with auto-scaling
C. Google Cloud Dataproc to run Apache Hadoop jobs to process each test
D. Google App Engine with Google StackDriver for logging
Answer: B
Explanation:
Google Compute Engine enables users to launch virtual machines (VMs) on demand. VMs can be launched
from the standard images or custom images created by users.
Managed instance groups offer autoscaling capabilities that allow you to automatically add or remove
instances from a managed instance group based on increases or decreases in load. Autoscaling helps your
applications gracefully handle increases in traffic and reduces cost when the need for resources is lower.
Incorrect Answers:
A: There is no mention of incoming IP data traffic for the custom C++ applications, so a Network Load Balancer with unmanaged instance groups is not needed.
C: Apache Hadoop is not fit for testing C++ applications. Apache Hadoop is an open-source software
framework used for distributed storage and processing of datasets of big data using the MapReduce
programming model.
D: Google App Engine is intended to be used for web applications.
Google App Engine (often referred to as GAE or simply App Engine) is a web framework and cloud computing
platform for developing and hosting web applications in Google-managed data centers.
Reference:
https://cloud.google.com/compute/docs/autoscaler/
A. Help the engineer to convert his websocket code to use HTTP streaming
B. Review the encryption requirements for websocket connections with the security team
C. Meet with the cloud operations team and the engineer to discuss load balancer options
D. Help the engineer redesign the application to use a distributed user session service that does not rely on
websockets and HTTP sessions.
Answer: C
Explanation:
Google Cloud Platform (GCP) HTTP(S) load balancing provides global load balancing for HTTP(S) requests
destined for your instances.
The HTTP(S) load balancer has native support for the WebSocket protocol.
Incorrect Answers:
A: HTTP server push, also known as HTTP streaming, is a client-server communication pattern that sends
information from an HTTP server to a client asynchronously, without a client request. A server push
architecture is especially effective for highly interactive web or mobile applications, where one or more
clients need to receive continuous information from the server.
Reference:
https://cloud.google.com/compute/docs/load-balancing/http/
A. Append metadata to file body; compress individual files; name files with serverName-Timestamp; create a new bucket if the bucket is older than 1 hour and save individual files to the new bucket, otherwise save files to the existing bucket.
B. Batch every 10,000 events with a single manifest file for metadata; compress event files and manifest file into a single archive file; name files using serverName-EventSequence; create a new bucket if the bucket is older than 1 day and save the single archive file to the new bucket, otherwise save the single archive file to the existing bucket.
C. Compress individual files; name files with serverName-EventSequence; save files to one bucket; set custom metadata headers for each object after saving.
D. Append metadata to file body; compress individual files; name files with a random prefix pattern; save files to one bucket.
Answer: D
Explanation:
answer is D
https://cloud.google.com/storage/docs/request-rate#naming-convention
"A longer randomized prefix provides more effective auto-scaling when ramping to very high read and write
rates. For example, a 1-character prefix using a random hex value provides effective auto-scaling from the
initial 5000/1000 reads/writes per second up to roughly 80000/16000 reads/writes per second, because the
prefix has 16 potential values. If your use case does not need higher rates than this, a 1-character randomized
prefix is just as effective at ramping up request rates as a 2-character or longer randomized prefix."
Example:
my-bucket/2fa764-2016-05-10-12-00-00/file1
my-bucket/5ca42c-2016-05-10-12-00-00/file2
my-bucket/6e9b84-2016-05-10-12-00-01/file3
Answer: C
Explanation:
Incorrect Answers:
A: To use the Stackdriver alerting console we must first set up alerting policies.
B: Data access logs only contain read-only operations.
Audit logs help you determine who did what, where, and when.
Cloud Audit Logging returns two types of logs:
✑ Admin activity logs
✑ Data access logs: contain log entries for read-only operations that do not modify any data, such as get, list, and aggregated list methods.
A. Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine
instance in the US-East region.
B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual
machine instance in the US-East region.
C. Create an image file from the root disk with Linux dd command, create a new virtual machine instance in the
US-East region
D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and
create a new virtual machine instance in the US-East region using the image file the root disk.
Answer: D
Explanation:
D is correct. A and B are about attaching the file system to a new VM, not setting it as the root disk of the new VM. Option C is not possible within GCP because the image must already be on the GCP platform before you can use gcloud or the Google Cloud Console to create a VM from it.
A. Configure a cron job to use the gcloud tool to take regular backups using persistent disk snapshots.
B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the
backup to Google Cloud Storage.
C. Use gcsfise to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups
to the mounted location using mysqldump.
D. Mount additional persistent disk volumes onto each virtual machine (VM) instance in a RAID10 array and use
LVM to create snapshots to send to Cloud Storage
Answer: B
Explanation:
Answer: B.
A persistent disk snapshot is not required: "They need to take backups of a specific database at regular intervals" and "The backup activity needs to complete as quickly as possible and cannot be allowed to impact disk performance."
This could be achieved with either a Local SSD or Cloud Storage FUSE (mounting a bucket as a directory), but the question states that the backup needs to complete as quickly as possible.
General rule: any additional component introduces latency. Even if we assume Cloud Storage and Local SSD offer the same write throughput, streaming data over the network to a Cloud Storage bucket introduces latency, while an attached Local SSD involves no network at all. Copying the backup from the Local SSD to the Cloud Storage bucket afterwards does not impact the MySQL data disk.
A. Ensure that the load tests validate the performance of Cloud Bigtable
B. Create a separate Google Cloud project to use for the load-testing environment
C. Schedule the load-testing tool to regularly run against the production environment
D. Ensure all third-party systems your services use is capable of handling high load
E. Instrument the production services to record every transaction for replay by the load-testing tool
F. Instrument the load-testing tool and the target services with detailed logging and metrics collection
Answer: ABF
Explanation:
A: Run your typical workloads against Bigtable. Always run your own typical workloads against a Bigtable cluster when doing capacity planning, so you can figure out the best resource allocation for your applications.
B: Create a separate Google Cloud project to use for the load-testing environment.
F: The most important and standard practice in testing: gather logs and metrics in the test environment to guide further scaling.
Answer: B
Explanation:
A is not correct because Project owner is too broad. The security team does not need to be able to make
changes to projects.
B is correct because: Org Viewer grants the security team permission to view the organization's display name, and Project Viewer grants the security team permission to see the resources within projects.
C is not correct because Org admin is too broad. The security team does not need to be able to make changes
to the organization.
D is not correct because Project owner is too broad. The security team does not need to be able to make
changes to projects.
Answer: BE
Explanation:
B and E.
Code signing only verifies the author; in other words, it only checks who you are, not what you have done.
A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling,max-nodes-10
C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D. Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application
Answer: C
Explanation:
C is correct: autoscaling can be enabled on an existing cluster with the update command, so there is no need to recreate the cluster (D). A only performs a one-time resize, and B's network tags do not control autoscaling.
A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
B. Use a Google Container Engine cluster to serve the website and store data to persistent disk.
C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
D. Use a single Compute Engine virtual machine (VM) to host a web server, backend by Google Cloud SQL.
Answer: AC
Explanation:
Reference:
https://cloud.google.com/storage-options/
Answer: BC
Explanation:
B: With Container Engine, Google will automatically deploy your cluster for you, update, patch, secure the
nodes.
Kubernetes Engine's cluster autoscaler automatically resizes clusters based on the demands of the
workloads you want to run.
C: Solutions like Datastore, BigQuery, AppEngine, etc are truly NoOps.
App Engine by default scales the number of instances running up and down to match the load, thus providing
consistent performance for your app at all times while minimizing idle instances and thus reducing cost.
Note: At a high level, NoOps means that there is no infrastructure to build out and manage during usage of the
platform. Typically, the compromise you make with
NoOps is that you lose control of the underlying infrastructure.
Reference:
https://www.quora.com/How-well-does-Google-Container-Engine-support-Google-Cloud-Platform%E2%80%
99s-NoOps-claim
Answer: C
Explanation:
C (correct answer): digitally sign each timestamp and log entry and store the signature.
Answers A, B, and D add no value for verifying the authenticity of your logs. Besides, logs are best exported to Cloud Storage, BigQuery, or Pub/Sub; a SQL database is not a good destination for exporting or storing log data.
Simplified Explanation
To verify the authenticity of your logs if they are tampered with or forged, you can use a certain algorithm to
generate digest by hashing each timestamp or log entry and then digitally sign the digest with a private key to
generate a signature. Anybody with your public key can verify that signature to confirm that it was made with
your private key and they can tell if the timestamp or log entry was modified. You can put the signature files
into a folder separate from the log files. This separation enables you to enforce granular security policies.
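A minimal sketch of this signing flow with OpenSSL (file and key names are placeholders):
sha256sum app.log > app.log.sha256                                            # digest of the log file
openssl dgst -sha256 -sign signing-key.pem -out app.log.sig app.log           # sign with the private key
openssl dgst -sha256 -verify signing-pub.pem -signature app.log.sig app.log   # anyone can verify with the public key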
A. 1. Create a second Google Workspace account and Organization. 2. Grant all developers the Project Creator
IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies
for all projects on both Organizations. 5. Additionally, set the production policies on the original Organization.
B. 1. Create a folder under the Organization resource named Production. 2. Grant all developers the Project
Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the
policies for all projects on the Organization. 5. Additionally, set the production policies on the Production folder.
C. 1. Create folders under the Organization resource named Development and Production. 2. Grant all
developers the Project Creator IAM role on the Development folder. 3. Move the developer projects into the
Development folder. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production
policies on the Production folder.
D. 1. Designate the Organization for production projects only. 2. Ensure that developers do not have the Project
Creator IAM role on the Organization. 3. Create development projects outside of the Organization using the
developer Google Workspace accounts. 4. Set the policies for all projects on the Organization. 5. Additionally,
set the production policies on the individual production projects.
Answer: C
Explanation:
C keeps everything under a single Organization, separates projects into Development and Production folders, grants the Project Creator role only on the Development folder, and applies the stricter production policies only on the Production folder.
A. 1. Mount the Cloud Storage bucket using gcsfuse on all backend Compute Engine instances. 2. Serve music
files directly from the backend Compute Engine instance.
B. 1. Create a Cloud Filestore NFS volume and attach it to the backend Compute Engine instances. 2. Download
popular songs in Cloud Filestore. 3. Serve music files directly from the backend Compute Engine instance.
C. 1. Copy popular songs into CloudSQL as a blob. 2. Update application code to retrieve data from CloudSQL
when Cloud Storage is overloaded.
D. 1. Create a managed instance group with Compute Engine instances. 2. Create a global load balancer and
configure it with two backends: ‹—גManaged instance group ‹—גCloud Storage bucket 3. Enable Cloud CDN
on the bucket backend.
Answer: D
Explanation:
Do not trust the official answer here; D is correct. In particular for this question, never use gcsfuse in production: performance is poor and reliability is weak, as Google itself states.
A is wrong because you should not serve files directly from a Compute Engine instance; Cloud Storage with Cloud CDN is the best option.
A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to
save.
B. Enable the Compute Engine API, and then enable logging on the firewall rules that match the traffic you
want to save.
C. Set up a Cloud Logging Dashboard titled Cloud VPN Logs, and then add a chart that queries for the VPN
metrics over a one-year time period.
D. Set up a filter in Cloud Logging and a topic in Pub/Sub to publish the logs.
Answer: A
Explanation:
Reference:
https://cloud.google.com/network-connectivity/docs/vpn/how-to/viewing-logs-metrics
A. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, use the
Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.
B. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, store all
non-PII data in BigQuery and store all PII data in a Cloud Storage bucket that has a retention policy set.
C. Ask the external partners to upload all data on Cloud Storage. Configure Bucket Lock for the bucket. Create
a Dataflow pipeline to read the data from the bucket. As part of the pipeline, use the Cloud Data Loss
Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.
D. Ask the external partners to import all data in your BigQuery dataset. Create a dataflow pipeline to copy the
data into a new table. As part of the Dataflow bucket, skip all data in columns that have PII data
Answer: A
Explanation:
Option C seems plausible, but it has two problems: personal data would be stored in Cloud Storage, and the Bucket Lock retention policy would then improperly retain that PII.
A. Create an aggregated export on the Production folder. Set the log sink to be a Cloud Storage bucket in an
operations project.
B. Create an aggregated export on the Organization resource. Set the log sink to be a Cloud Storage bucket in
an operations project.
C. Create log exports in the production projects. Set the log sinks to be a Cloud Storage bucket in an operations
project.
D. Create log exports in the production projects. Set the log sinks to be BigQuery datasets in the production
projects, and grant IAM access to the operations team to run queries on the datasets.
Answer: A
Explanation:
A is more likely, because clearly it's stated in the question that they are interested in "Production"
folder/projects logging, not the entire organization.
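For example, an aggregated sink at the folder level could look like this (the folder ID, bucket, and filter are placeholders):
gcloud logging sinks create prod-logs-sink \
  storage.googleapis.com/example-ops-logs-bucket \
  --folder=123456789012 --include-children \
  --log-filter='severity>=DEFAULT'
Grant the sink's writer identity the Storage Object Creator role on the bucket in the operations project.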
A. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage
bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4.
Configure a retention policy at the bucket level using bucket lock.
B. 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket. 2. Create a
sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a
Coldline Cloud Storage bucket after one month.
C. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery
table. 3. Set a time_partitioning_expiration of 30 days.
D. 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table. 2. Set
a time_partitioning_expiration of 30 days.
Answer: A
Explanation:
The answer is A.
The practice for managing logs generated on Compute Engine on Google Cloud is to install the Cloud Logging
agent and send them to Cloud Logging.
The sent logs will be aggregated into a Cloud Logging sink and exported to Cloud Storage.
The reason for using Cloud Storage as the destination for the logs is that the requirement in question requires
setting up a lifecycle based on the storage period.
In this case, the log will be used for active queries for 30 days after it is saved, but after that, it needs to be
stored for a longer period of time for auditing purposes.
If the data is to be used for active queries, we can use BigQuery's Cloud Storage data query feature and move
the data past 30 days to Coldline to build a cost-optimal solution.
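For example, the Object Lifecycle rule that moves month-old logs to Coldline could be applied like this (the bucket name is a placeholder):
# lifecycle.json
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
gsutil lifecycle set lifecycle.json gs://example-log-bucket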
Answer: A
Explanation:
https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains
Answer: C
Explanation:
https://cloud.google.com/bigtable/docs/schema-design#row-keys
A. 1. From the dataset where you have the source data, create views of tables that you want to share, excluding
PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access
controls to the dataset that contains the view.
B. 1. From the dataset where you have the source data, create materialized views of tables that you want to
share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3.
Assign access controls to the dataset that contains the view.
C. 1. Create a dataset for the data science team. 2. Create views of tables that you want to share, excluding PII.
3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access
controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
D. 1. Create a dataset for the data science team. 2. Create materialized views of tables that you want to share,
excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4.
Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source
dataset.
Answer: C
Explanation:
Reference:
https://cloud.google.com/blog/topics/developers-practitioners/bigquery-admin-reference-guide-data-govern
ance?skip_cache=true
Answer: B
Explanation:
Reference:
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets
Question: 181 CertyIQ
You have a Compute Engine managed instance group that adds and removes Compute Engine instances from the
group in response to the load on your application. The instances have a shutdown script that removes REDIS
database entries associated with the instance. You see that many database entries have not been removed, and
you suspect that the shutdown script is the problem. You need to ensure that the commands in the shutdown script
are run reliably every time an instance is shut down. You create a Cloud Function to remove the database entries.
What should you do next?
A. Modify the shutdown script to wait for 30 seconds before triggering the Cloud Function.
B. Do not use the Cloud Function. Modify the shutdown script to restart if it has not completed in 30 seconds.
C. Set up a Cloud Monitoring sink that triggers the Cloud Function after an instance removal log message
arrives in Cloud Logging.
D. Modify the shutdown script to wait for 30 seconds and then publish a message to a Pub/Sub queue.
Answer: C
Explanation:
C is the answer as shutdown script is run based on best effort and not a reliable method.
https://cloud.google.com/compute/docs/shutdownscript#limitations
Compute Engine executes shutdown scripts only on a best-effort basis. In rare cases, Compute Engine cannot
guarantee that the shutdown script will complete.
A. Use Google Cloud Shell in the Google Cloud Console to interact with Google Cloud.
B. Create a Compute Engine instance and install gcloud on the instance. Connect to this instance via SSH to
always use the same gcloud installation when interacting with Google Cloud.
C. Install gcloud on all of your workstations. Run the command gcloud components auto-update on each
workstation
D. Use a package manager to install gcloud on your workstations instead of installing it manually.
Answer: A
Explanation:
Reference:
https://cloud.google.com/sdk/gcloud
Answer: C
Explanation:
VPC peering cannot be established between VPCs whose IP ranges overlap. C is fine because you can establish VPNs across these VPCs and include only the applications' required IP ranges which, as mentioned, do not overlap.
A. Inspect the logs and metrics from the instances in Cloud Logging and Cloud Monitoring.
B. Change the Compute Engine Instances behind the application to a machine type with more CPU and memory.
C. Restore a backup of the application database from a time before the application became slow.
D. Deploy the applications on a managed instance group with autoscaling enabled. Add a load balancer in front
of the managed instance group, and have the users connect to the IP of the load balancer.
Answer: A
Explanation:
Agree with A. First remove the impossible answers, B and C. That leaves A or D; D is a good, recommended action, but the question asks what to do first, and the first step is always to troubleshoot.
Answer: A
Explanation:
A: Configuring the right liveness and readiness probes prevents outages when rolling out a new ReplicaSet of a Deployment, because Pods are only getting traffic when they are considered ready.
B: With GKE, you do not deal with MIGs.
C: Does not use GKE tools and is therefore not the best option.
D: Does alert you but does not prevent the outage.
A. Create a second GKE cluster for the batch workloads only. Allocate the 200 original nodes across both
clusters.
B. Configure CPU and memory limits on the namespaces in the cluster. Configure all Pods to have a CPU and
memory limits.
C. Configure a HorizontalPodAutoscaler for all stateless workloads and for all compatible stateful workloads.
Configure the cluster to use node auto scaling.
D. Change the node pool to use preemptible VMs.
Answer: C
Explanation:
A: Is not necessary because you can have multiple node pools with different configurations.
B: Optimizes resource usage of CPU/memory in your existing node pool but does not necessarily improve cost - still an option that should be considered.
C: This looks really good. Autoscaling workloads and the node pools makes your whole infrastructure more elastic and gives you the option to rely on the same node pool.
D: This might not be a good option for every type of workload. Batch and stateless workloads can often handle this quite well, but stateful workloads are not well-suited for operation on preemptible VMs.
Since only one answer is accepted, I'll choose C.
C is the correct answer as it doesn't involve major changes to the current Kubernetes configuration
A. 1. In the BigQuery dataset that contains all the tables to be queried, add a label for each user that can launch
a query. 2. Open the Billing page of the project. 3. Select Reports. 4. Select BigQuery as the product and filter
by the user you want to check.
B. 1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery. 2. Perform a BigQuery query
on the generated table to extract the information you need.
C. 1. Create a Cloud Logging sink to export BigQuery data access logs to Cloud Storage. 2. Develop a Dataflow
pipeline to compute the cost of queries split by users.
D. 1. Activate billing export into BigQuery. 2. Perform a BigQuery query on the billing table to extract the
information you need.
Answer: B
Explanation:
B is my answer.
https://cloud.google.com/blog/products/data-analytics/taking-a-practical-approach-to-bigquery-cost-monitoring
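As a rough sketch (not from the original explanation), exporting BigQuery data access logs back into BigQuery with a Cloud Logging sink might look like the following; the project ID, dataset name, and log filter are hypothetical:
bq mk --dataset PROJECT_ID:bq_audit
gcloud logging sinks create bq-audit-sink \
    bigquery.googleapis.com/projects/PROJECT_ID/datasets/bq_audit \
    --log-filter='protoPayload.serviceName="bigquery.googleapis.com" AND protoPayload.methodName="jobservice.jobcompleted"'
# Grant the sink's writer identity BigQuery Data Editor on the dataset, then query the
# exported table to aggregate billed bytes per user (field names depend on the audit log format).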
Answer: A
Explanation:
https://cloud.google.com/vpc/docs/vpc-peering
Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud
(VPC) networks regardless of whether they belong to the same project or the same organization.
Answer: B
Explanation:
Reference:
https://cloud.google.com/storage/docs/object-versioning
Answer: A
Explanation:
gauge: the autoscaler computes the average value of the data collected in the last couple of minutes and
compares that to the utilization target.
delta-per-minute: the autoscaler calculates the average rate of growth per minute and compares that to the
utilization target.
delta-per-second: the autoscaler calculates the average rate of growth per second and compares that to the
utilization target. For accurate comparisons, if you set the utilization target in seconds, use delta-per-second
as the target type. Likewise, use delta-per-minute for a utilization target in minutes.
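A hedged example (not from the original explanation) of wiring a custom metric to a managed instance group autoscaler with an explicit target type; the group name, zone, replica counts, and metric are hypothetical:
gcloud compute instance-groups managed set-autoscaling my-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 --max-num-replicas=20 \
    --custom-metric-utilization metric=custom.googleapis.com/queue_depth,utilization-target=100,utilization-target-type=GAUGE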
Answer: A
Explanation:
A is true only if the on-prem (peer) gateway has two separate external IP addresses. The HA VPN gateway uses two tunnels, one tunnel to each external IP address on the peer device, as described in
https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies#configurations_that_support_9999_availability
C is a complete solution that provides full redundancy of the on-prem gateway, but it is probably more expensive, and having two HA VPN gateways is an unusual configuration; the online documentation only describes using one HA VPN gateway.
A.1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine
application is an allowed origin.
2. Use the Cloud Storage Signed URL feature to generate a POST URL.
B.1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine
application is an allowed origin.
2. Assign the Cloud Storage WRITER role to users who upload files.
C.1. Use the Cloud Storage Signed URL feature to generate a POST URL.
2. Use App Engine default credentials to sign requests against Cloud Storage.
D.1. Assign the Cloud Storage WRITER role to users who upload files.
2. Use App Engine default credentials to sign requests against Cloud Storage.
Answer: A
Explanation:
"Cloud Storage supports this specification by allowing you to configure your buckets to support CORS.
Continuing the above example, you can configure the example.storage.googleapis.com bucket so that a
browser can share its resources with scripts from example.appspot.com."
Reference:
https://cloud.google.com/storage/docs/cross-origin#server-side-support
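To make option A concrete, here is a minimal, hedged sketch; the bucket name, origin, and object path are hypothetical, and gsutil signurl is shown with PUT (a signed policy document would be used for a true browser POST form):
cat > cors.json <<EOF
[{"origin": ["https://my-app.appspot.com"],
  "method": ["PUT", "POST"],
  "responseHeader": ["Content-Type"],
  "maxAgeSeconds": 3600}]
EOF
gsutil cors set cors.json gs://upload-bucket
gsutil signurl -m PUT -d 15m service-account-key.json gs://upload-bucket/uploads/report.csv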
To download updated packages, instances must connect to a public repository outside the boundaries of Google
Cloud. You need to allow sub-b to access the external repository. What should you do?
Answer: B
Explanation:
Cloud NAT allows the resources in a private subnet to access the internet—for updates, patching, config
management, and more—in a controlled and efficient manner.
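A minimal sketch (not from the original answer) of giving sub-b outbound internet access through Cloud NAT; the network, router, and region names are placeholders, while sub-b is the subnet from the question:
gcloud compute routers create nat-router --network=my-vpc --region=us-east1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-east1 \
    --auto-allocate-nat-external-ips \
    --nat-custom-subnet-ip-ranges=sub-b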
A.1. Create an image of the on-premises virtual machines and upload into Cloud Storage.
2. Import the image as a virtual disk on Compute Engine.
B.1. Create standard instances on Compute Engine.
2. Select as the OS the same Microsoft Windows version that is currently in use in the on-premises
environment.
C.1. Create an image of the on-premises virtual machine.
2. Import the image as a virtual disk on Compute Engine.
3. Create a standard instance on Compute Engine, selecting as the OS the same Microsoft Windows version
that is currently in use in the on-premises environment.
4. Attach a data disk that includes data that matches the created image.
D.1. Create an image of the on-premises virtual machines.
2. Import the image as a virtual disk on Compute Engine using --os=windows-2022-dc-v.
3. Create a sole-tenancy instance on Compute Engine that uses the imported disk as a boot disk.
Answer: D
Explanation:
3. Create a sole-tenancy instance on Compute Engine that uses the imported disk as a boot disk.
Reference:
https://cloud.google.com/compute/docs/import/importing-virtual-disks
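A hedged sketch of the import flow in option D; the bucket, image, instance, node group, and zone are hypothetical, and the --os value is taken verbatim from the option (check gcloud compute images import --help for the values your SDK accepts):
gsutil cp windows-server.vmdk gs://migration-staging/
gcloud compute images import windows-app-image \
    --source-file=gs://migration-staging/windows-server.vmdk \
    --os=windows-2022-dc-v
# After creating a sole-tenant node group, boot the instance on it:
gcloud compute instances create windows-app-vm \
    --zone=us-central1-a --image=windows-app-image --node-group=win-node-group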
You need to design the connectivity between the locations to meet the business requirements. What should you
provision?
A.An HA Cloud VPN gateway connected with two tunnels to an on-premises VPN gateway.
B.A Classic Cloud VPN gateway connected with two tunnels to an on-premises VPN gateway.
C.Two HA Cloud VPN gateways connected to two on-premises VPN gateways. Configure each HA Cloud VPN
gateway to have two tunnels, each connected to different on-premises VPN gateways.
D.A Classic Cloud VPN gateway connected with one tunnel to an on-premises VPN gateway.
Answer: A
Explanation:
An HA Cloud VPN gateway connected with two tunnels to an on-premises VPN gateway.
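A condensed, hedged sketch of option A; the network, router, ASN, shared secret, and peer IP (a documentation-range placeholder) are assumptions, and a second tunnel on interface 1 would be created the same way:
gcloud compute vpn-gateways create ha-vpn-gw --network=my-vpc --region=us-central1
gcloud compute routers create vpn-router --network=my-vpc --region=us-central1 --asn=65001
gcloud compute external-vpn-gateways create on-prem-gw --interfaces=0=203.0.113.10
gcloud compute vpn-tunnels create tunnel-0 \
    --region=us-central1 --vpn-gateway=ha-vpn-gw --interface=0 \
    --peer-external-gateway=on-prem-gw --peer-external-gateway-interface=0 \
    --router=vpn-router --ike-version=2 --shared-secret=SHARED_SECRET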
A.Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.
B.Use the Data Transfer appliance to perform an offline migration.
C.Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into
Cloud Storage.
D.Upload the data with gcloud storage cp.
Answer: B
Explanation:
A. Create two G Suite accounts to manage users: one for development/test/staging and one for production.
Each account should contain one project for every application
B. Create two G Suite accounts to manage users: one with a single project for all development applications and
one with a single project for all production applications
C. Create a single G Suite account to manage users with each stage of each application in its own project
D. Create a single G Suite account to manage users with one project for the development/test/staging
environment and one project for the production environment
Answer: C
Explanation:
For segregation of applications and environments, C is the best reference architecture model
A. Delete the virtual machine (VM) and disks and create a new one
B. Delete the instance, attach the disk to a new VM, and investigate
C. Take a snapshot of the disk and connect to a new machine to investigate
D. Check inbound firewall rules for the network the machine is connected to
E. Connect the machine to another network with very simple firewall rules and investigate
F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and
investigate
Answer: CDF
Explanation:
D: Handling "Unable to connect on port 22" error message
Possible causes include:
✑ There is no firewall rule allowing SSH access on the port. SSH access on port 22 is enabled on all Compute
Engine instances by default. If you have disabled access, SSH from the Browser will not work. If you run sshd
on a port other than 22, you need to enable the access to that port with a custom firewall rule.
✑ The firewall rule allowing SSH access is enabled, but is not configured to allow connections from GCP Console services. Source IP addresses for browser-based SSH sessions are dynamically allocated by GCP Console and can vary from session to session.
F: Handling "Could not connect, retrying..." error
You can verify that the daemon is running by navigating to the serial console output page and looking for
output lines prefixed with the accounts-from-metadata: string. If you are using a standard image but you do
not see these output prefixes in the serial console output, the daemon might be stopped. Reboot the instance
to restart the daemon.
Reference:
https://cloud.google.com/compute/docs/ssh-in-browser
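For reference (not part of the original answer), hedged commands matching D and F; the network, source range, instance name, and zone are placeholders:
gcloud compute firewall-rules create allow-ssh \
    --network=my-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:22 --source-ranges=203.0.113.0/24
gcloud compute instances add-metadata my-vm --zone=us-central1-a \
    --metadata=serial-port-enable=TRUE
gcloud compute instances get-serial-port-output my-vm --zone=us-central1-a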
A. Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs)
B. Authenticate the on-premises infrastructure with a user account and provision service account keys for the
VMs
C. Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP)
managed keys for the VMs
D. Deploy a custom authentication service on GCE/Google Kubernetes Engine (GKE) for the on-premises
infrastructure and use GCP managed keys for the VMs
Answer: C
Explanation:
Migrating data to Google Cloud Platform
Let's say that you have some data processing that happens on another cloud provider and you want to
transfer the processed data to Google Cloud Platform. You can use a service account from the virtual
machines on the external cloud to push the data to Google Cloud Platform. To do this, you must create and
download a service account key when you create the service account and then use that key from the external
process to call the Cloud Platform APIs.
Reference:
https://cloud.google.com/iam/docs/understanding-service-accounts#migrating_data_to_google_cloud_platfo
rm
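A minimal sketch of option C's on-premises half; the service account name, project ID, and role are hypothetical:
gcloud iam service-accounts create onprem-uploader --display-name="On-prem data uploader"
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:onprem-uploader@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectCreator"
gcloud iam service-accounts keys create onprem-key.json \
    --iam-account=onprem-uploader@PROJECT_ID.iam.gserviceaccount.com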
Answer: C
Explanation:
C. The requirement says "guarantee service availability", so we need to check error rates to make sure our application is working correctly. Error rate and latency cover both the technical and the business (visits) concerns, hence C.
Answer: ACF
Explanation:
A single VPN tunnel limits throughput, and copying 20 TB across long distances is a big bottleneck. A VPN across the internet cannot be relied upon for high performance.
A. Cloud Spanner
B. Google BigQuery
C. Google Cloud SQL
D. Google Cloud Datastore
Answer: D
Explanation:
Common workloads for Google Cloud Datastore:
✑ User profiles
✑ Product catalogs
✑ Game state
Reference:
https://cloud.google.com/storage-options/
https://cloud.google.com/datastore/docs/concepts/overview
A. Store the card data in Secret Manager after running a query to identify duplicates.
B. Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.
C. Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.
D. Use column-level encryption to store the data in Cloud SQL.
Answer: B
Explanation:
B, but it should be reworded as follows for clarity: "B. Encrypt the card data with a deterministic algorithm and store it in Firestore using Datastore mode."
https://cloud.google.com/architecture/tokenizing-sensitive-cardholder-data-for-pci-dss#a_service_for_handling_sensitive_information
Answer: D
Explanation:
It is D:
https://cloud.google.com/armor/docs/configure-security-policies#gcloud_11
# PRIORITY is the numeric rule priority, e.g. 1000:
gcloud compute security-policies rules create PRIORITY \
    --security-policy my-policy \
    --expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \
    --action "allow"
A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
B. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
D. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.
Answer: C
Explanation:
C is OK. It needs to be triggered by the deployment and not on a schedule, and Cloud Storage doesn't seem relevant in the context of the question.
2. IMHO, the question is not clear: is it a git repository or an object repository? If it's a git repository, then there needs to be a logging sink or webhook that triggers the Cloud Function. The benefit of the doubt goes to C.
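A hedged sketch of option C; the topic, function, runtime, entry point, and message payload are hypothetical:
gcloud pubsub topics create deployment-events
gcloud functions deploy post-deploy-hook \
    --runtime=python310 --trigger-topic=deployment-events --entry-point=handle_deployment
# Final step of the deployment job:
gcloud pubsub topics publish deployment-events --message='{"app":"frontend","version":"1.2.3"}'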
Answer: A
Explanation:
AI Explanations helps you understand your model's outputs for classification and regression tasks. Whenever
you request a prediction on AI Platform, AI Explanations tells you how much each feature in the data
contributed to the predicted result. You can then use this information to verify that the model is behaving as
expected, recognize bias in your models, and get ideas for ways to improve your model and your training data.
Reference:
https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/preparing-metadata
A. Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data
by season and event.
B. Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using
season as a primary key.
C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.
D. Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use
separate database instances for each season.
Answer: C
Explanation:
C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.
Reference:
https://cloud.google.com/bigquery/public-data
A. Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for
analysis.
B. Use the gcloud compute instances list to list the virtual machine instances that have the idle: true label set.
C. Use the gcloud recommender command to list the idle virtual machine instances.
D. From the Google Console, identify which Compute Engine instances in the managed instance groups are no
longer responding to health check probes.
Answer: C
Explanation:
C. Use the gcloud recommender command to list the idle virtual machine instances.
Reference:
https://cloud.google.com/compute/docs/instances/viewing-and-applying-idle-vm-recommendations
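A hedged example of option C; the project ID and zone are placeholders:
gcloud recommender recommendations list \
    --project=PROJECT_ID --location=us-central1-a \
    --recommender=google.compute.instance.IdleResourceRecommender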
A. Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.
B. Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.
C. Use Firebase Authentication for EHR's user facing applications.
D. Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.
E. Use GKE private clusters for all Kubernetes workloads.
Answer: AB
Explanation:
A. Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.
B. Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.
Question: 210 CertyIQ
For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for
securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed
using Google Cloud services. What should you do? (Choose two.)
A. Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.
B. Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.
C. Configure Container Registry to only allow trusted service accounts to create and deploy containers from the
registry.
D. Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before
deploying the workload.
Answer: AD
Explanation:
A & D.
To ensure deployments are secure and consistent, automatically scan images for vulnerabilities with Container Analysis (https://cloud.google.com/docs/ci-cd/overview?hl=en&skip_cache=true).
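As an illustration only, enabling Binary Authorization at cluster-creation time might look like this (the flag varies by gcloud release; older versions used --enable-binauthz); the cluster name and zone are placeholders, and attestations would be signed in the CI/CD pipeline, for example with a Cloud KMS key:
gcloud container clusters create secure-cluster \
    --zone=us-central1-a \
    --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE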
Answer: A
Explanation:
Answer: D
Explanation:
* Provide a secure and high-performance connection between on-premises systems and Google Cloud.
C: Direct Peering is used for Google Workspace traffic, not as a DMZ, so again it is not suitable.
The choice between A and D lies in the question, which states: "You want to follow Google's recommended practices for production-level applications."
Google recommends using the 99.99% SLA Interconnect (Dedicated or Partner) for production-level applications, as stated here:
https://cloud.google.com/network-connectivity/docs/interconnect/tutorials/production-level-overview
The answer is D.
Answer: C
Explanation:
C is the better option. Increasing the total timeout would help reduce timeout errors, but remember that in this case we are getting too many messages from the server because load increased, and we need to reduce latency.
A. Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend
Compute Engine instances.
B. Revoke the compute.networkAdmin role from all users in the project with front end instances.
C. Create an Identity and Access Management (IAM) policy that maps the IT staff to the
compute.networkAdmin role for the organization.
D. Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the
compute.addresses.create permission.
Answer: A
Explanation:
A. Use a private cluster with a private endpoint with master authorized networks configured.
B. Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.
C. Use a private cluster with a public endpoint with master authorized networks configured.
D. Use a public cluster with master authorized networks enabled and firewall rules.
Answer: A
Explanation:
I'll go with A as it is the most secure option. C would be more cost-effective if EHR had no plans for Cloud Interconnect / VPN (which they do have).
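A hedged sketch of option A; the cluster name, zone, and CIDR ranges are placeholders, and with a private endpoint the authorized networks must be reachable internally (for example over the planned Interconnect/VPN):
gcloud container clusters create ehr-cluster \
    --zone=us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes --enable-private-endpoint \
    --master-ipv4-cidr=172.16.0.32/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks=10.0.0.0/8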
Answer: A
Explanation:
From Scenario:
A few of their games were more popular than expected, and they had problems scaling their application
servers, MySQL databases, and analytics tools.
Requirements for Game Analytics Platform include: Dynamically scale up or down based on game activity
Answer: A
Explanation:
From scenario: Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run customized Linux distro
Answer: C
Explanation:
If the question is erroneously formulated and they actually mean Google Container Registry and Google Kubernetes Engine, then C is the right answer.
Question: 219 CertyIQ
Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new
feature, which suddenly became very popular. A record number of users are trying to use the service, but many of
them are getting 503 errors and very slow response times. What should they investigate first?
Answer: B
Explanation:
503 is the Service Unavailable error. If the database were down, everyone would be getting the 503 error.
A. Create a project for development and test and another for staging and production
B. Create a network for development and test and another for staging and production
C. Create one subnetwork for development and another for staging and production
D. Create one project for development, a second for staging and a third for production
Answer: D
Explanation:
In the requirements, the staging environment needs access to production, not the other way around. Answer A could allow staging and production to access each other. In answer D, staging and production are in different projects, so you can limit access from either side. So D is correct.
Answer: B
Explanation:
Ingest millions of streaming events per second from anywhere in the world with Cloud Pub/Sub, powered by
Google's unique, high-speed private network. Process the streams with Cloud Dataflow to ensure reliable,
exactly-once, low-latency data transformation. Stream the transformed data into BigQuery, the cloud-native
data warehousing service, for immediate analysis via SQL or popular visualization tools.
From scenario: They plan to deploy the game's backend on Google Compute Engine so they can capture
streaming metrics, run intensive analytics.
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data
5. Process files that are regularly uploaded by users' mobile devices
6. Use only fully managed services
Reference:
https://cloud.google.com/solutions/big-data/stream-analytics/
A. Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.
B. Write a schema migration plan to denormalize data for better performance in BigQuery.
C. Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.
D. Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against
the full dataset to confirm that they complete successfully.
E. Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud
Storage.
Answer: AB
Explanation:
Correct Answer A, B
Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow
Write a schema migration plan to denormalize data for better performance in BigQuery.
Reference
https://cloud.google.com/bigquery/docs/loading-data#loading_denormalized_nested_and_repeated_data
Answer: D
Explanation:
A. Store as much analytics and game activity data as financially feasible today so it can be used to train
machine learning models to predict user behavior in the future.
B. Begin packaging their game backend artifacts in container images and running them on Google Kubernetes
Engine to improve the ability to scale up or down based on game activity.
C. Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve
development velocity.
D. Adopt a schema versioning tool to reduce downtime when adding new game features that require storing
additional player data in the database.
E. Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical
kernel patches and package updates and reduce the risk of 0-day vulnerabilities.
Answer: AB
Explanation:
The answers are A & B.
A: It makes sense to collect as many data points as possible in order to have adequate data for ML models.
B: Google recommends microservices and containerized images; if they already have container images, it would be easier to push them to GCR and then run them on GKE.
C: Cloud Build is the Google solution for CI/CD and requires a Dockerfile, so CI/CD pipeline code for Jenkins will not be useful in the future.
D: A schema versioning tool is not related to future Google services.
E: Google does the patching on all VMs, so it is not relevant.
A. Deploy failure injection software to the game analytics platform that can inject additional latency to mobile
client analytics traffic.
B. Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and
run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.
C. Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded
from mobile devices.
D. Create an opt-in beta of the game that runs on players' mobile devices and collects response times from
analytics endpoints running in Google Cloud Platform regions all over the world.
Answer: A
Explanation:
1. A seems right
2. A is ok to some extent.
A. Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.
B. Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.
C. Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.
D. Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for
historical data queries.
Answer: D
Explanation:
Correct Answer D
Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for
historical data queries.
Storing time-series data in Cloud Bigtable is a natural fit. Cloud Spanner scales horizontally and serves data with low latency while maintaining transactional consistency and industry-leading 99.999% (five 9s) availability, 10x less downtime than four nines (under 5 minutes per year); it helps future-proof your database backend. After you load your data into BigQuery, you can query the data in your tables; BigQuery supports two types of queries: interactive and batch.
A. Cloud Bigtable
B. Cloud Spanner
C. BigQuery
D. Cloud Datastore
Answer: A
Explanation:
Cloud Bigtable
A. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance
group. Use an L4 load balancer.
B. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance
group. Use an L4 load balancer.
C. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance
group. Use an L7 load balancer.
D. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance
group. Use an L7 load balancer.
Answer: C
Explanation:
C: Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.
The requirements ask to minimize downtime, so we need redundancy across regions and across zones within regions. In addition, GCP best practices recommend an HTTP(S) (L7) load balancer for internet-facing apps when designing for high availability. Quote from that page:
"GCP offers several variations of load balancing. The HTTP(S) load balancer is often used to expose internet-facing apps. This load balancer provides global balancing, allowing distribution of load across regions in different geographies. If a zone or region becomes unavailable, the load balancer directs traffic to a zone with available capacity. For more details, see application capacity optimizations with global load balancing."
Answer: B
Explanation:
Reference:
https://cloud.google.com/storage/docs/gsutil/commands/cp
Question: 230 CertyIQ
You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic
access to a legacy game's Firestore database.
Access should be as restricted as possible. What should you do?
A. Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's
IAM page, and then give the Organization Admin role to both SAs.
B. Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin
role, and then give it the Firebase Admin role in both projects.
C. Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM
page, and then give it the Firebase Admin role in both projects.
D. Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and
then migrate the new game to the legacy game's project.
Answer: C
Explanation:
C. Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects.
Answer: A
Explanation:
A is correct.
You can limit the physical location of a new resource with the Organization Policy Service resource locations
constraint. You can use the location property of a resource to identify where it is deployed and maintained by
the service. For data-containing resources of some Google Cloud services, this property also reflects the
location where data is stored. This constraint allows you to define the allowed Google Cloud locations where
the resources for supported services in your hierarchy can be created.
After you define resource locations, this limitation will apply only to newly-created resources. Resources you
created before setting the resource locations constraint will continue to exist and perform their function.
https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations
Question: 232 CertyIQ
You need to implement a network ingress for a new game that meets the defined business and technical
requirements. Mountkirk Games wants each regional game instance to be located in multiple Google Cloud
regions. What should you do?
A. Configure a global load balancer connected to a managed instance group running Compute Engine
instances.
B. Configure kubemci with a global load balancer and Google Kubernetes Engine.
C. Configure a global load balancer with Google Kubernetes Engine.
D. Configure Ingress for Anthos with a global load balancer and Google Kubernetes Engine.
Answer: D
Explanation:
The confusing thing here is that GCP has renamed the same solution multiple times. The concept is "Multi Cluster Ingress (MCI)", and kubemci was the original solution for setting this up. Then GCP released "Ingress for Anthos", which replaced kubemci. Now they have again renamed "Ingress for Anthos" to "Multi Cluster Ingress" (because it applies to more than just Anthos). If you see this question in the exam, it should no longer provide "Ingress for Anthos" as an option, but instead will say something like "Multi Cluster Ingress". The answers can be found at these links:
https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress
https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress
Answer: C
Explanation:
The answer is C. End users don't care about or need to know about server uptime, CPU, or memory. End users care about their user experience.
A Service Level Indicator (SLI) is a metric for the level of service provided to end users: real numbers about performance. Example: request latency under 500 ms over the last 15 minutes at the 95th percentile.
A Service Level Objective (SLO) is the target level for the reliability of your service.
An SLA is the agreement that you make with the end users.
Question: 234 CertyIQ
Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud.
You want to streamline the process and follow
Google-recommended practices. What should you do?
A. Configure Workload Identity and service accounts to be used by the application platform.
B. Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the
application platform.
C. Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use
Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be
used by the application platform.
D. Configure HashiCorp Vault on Compute Engine, and use customer managed encryption keys and Cloud Key
Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the
application platform.
Answer: A
Explanation:
A is correct.
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
Workload Identity is the recommended way to access Google Cloud services from applications running within
GKE due to its improved security properties and manageability. For information about alternative ways to
access Google Cloud APIs from GKE, refer to the alternatives section below.
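A minimal, hedged sketch of wiring up Workload Identity; the cluster, namespace, Kubernetes service account, Google service account, and project ID are hypothetical:
gcloud container clusters create game-cluster \
    --zone=us-central1-a --workload-pool=PROJECT_ID.svc.id.goog
gcloud iam service-accounts add-iam-policy-binding \
    game-backend@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:PROJECT_ID.svc.id.goog[game-ns/game-ksa]"
kubectl annotate serviceaccount game-ksa --namespace=game-ns \
    iam.gke.io/gcp-service-account=game-backend@PROJECT_ID.iam.gserviceaccount.com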
A. Upload your mobile app to the Firebase Test Lab, and test the mobile app on Android and iOS devices.
B. Create Android and iOS VMs on Google Cloud, install the mobile app on the VMs, and test the mobile app.
C. Create Android and iOS containers on Google Kubernetes Engine (GKE), install the mobile app on the
containers, and test the mobile app.
D. Upload your mobile app with different configurations to Firebase Hosting and test each configuration.
Answer: A
Explanation:
Correct Answer: A
- Firebase Test Lab is a cloud-based app testing infrastructure that lets you test your app on a range of
devices and configurations, so you can get a better idea of how it'll perform in the hands of live users.
- Firebase Test Lab runs tests on a wide range of Android and iOS devices hosted by Test Lab.
Answer: A
Explanation:
The push endpoint can be a load balancer.
A container cluster can be used.
Cloud Pub/Sub for Stream Analytics
Reference:
https://cloud.google.com/pubsub/
https://cloud.google.com/solutions/iot/
https://cloud.google.com/solutions/designing-connected-vehicle-platform
https://cloud.google.com/solutions/designing-connected-vehicle-platform#data_ingestion
http://www.eweek.com/big-data-and-analytics/google-touts-value-of-cloud-iot-core-for-analyzing-connected-car-data
A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners
B. Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public
C. Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public
D. Use Google Container Engine with a Django Python container. Focus on an API for the public
E. Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework.
Focus on an API for dealers and partners
Answer: A
Explanation:
Develop, deploy, protect and monitor your APIs with Google Cloud Endpoints. Using an Open API Specification
or one of our API frameworks, Cloud Endpoints gives you the tools you need for every phase of API
development.
From scenario:
Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory.
Support the dealer network with more data on how their customers use their equipment to better position new products and services.
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.
Reference:
https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth
Answer: A
Explanation:
Delegate application authorization with OAuth2
Cloud Platform APIs support OAuth 2.0, and scopes provide granular authorization over the methods that are
supported. Cloud Platform supports both service-account and user-account OAuth, also called three-legged
OAuth.
Reference:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#delegate_application_
authorization_with_oauth2 https://cloud.google.com/appengine/docs/flexible/go/authorizing-apps
Answer: B
Explanation:
B is the correct answer; a similar question appears in Google's sample questions.
We need to buffer: BigQuery's default limit is 100 API calls per second, and so far this cannot be changed. Hence we should add a buffer using Pub/Sub, so B.
A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning
analysis of metrics
B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning
analysis of metrics
C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine
learning analysis of metrics
D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local
inventory by a fixed factor
Answer: C
Explanation:
The Avro binary format is the preferred format for loading compressed data. Avro data is faster to load
because the data can be read in parallel, even when the data blocks are compressed.
Cloud Storage supports streaming transfers with the gsutil tool or boto library, based on HTTP chunked
transfer encoding. Streaming data lets you stream data to and from your Cloud Storage account as soon as it
becomes available without requiring that the data be first saved to a separate file. Streaming transfers are
useful if you have a process that generates data and you do not want to buffer it locally before uploading it, or
if you want to send the result from a computational pipeline directly into Cloud Storage.
Reference:
https://cloud.google.com/storage/docs/streaming
https://cloud.google.com/bigquery/docs/loading-data
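For example (hedged; the producer command, bucket, and object names are placeholders), a process can stream its output straight into Cloud Storage without an intermediate file:
./collect_telemetry | gsutil cp - gs://telemetry-landing/vehicle-42/$(date +%Y%m%d).avro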
Answer: B
Explanation:
Correct Answer B
From the case study, we can conclude that management (the CXOs) is concerned with rapid provisioning of resources (infrastructure) for growth as well as with cost management, such as cost optimization of infrastructure, trading up-front capital expenditures (CapEx) for ongoing operating expenditures (OpEx), and total cost of ownership (TCO).
A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the
ETL process using data in the bucket
B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the
data to Multi-Regional buckets in US, EU, and Asia. Run the ETL process using the data in the bucket
C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and
Asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket
D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in US, EU, and Asia
using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket
Answer: C
Explanation:
C.
A multi-region is a large geographic area, such as the United States, that contains two or more geographic places.
A. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job
B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job
C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-
region bucket and use a Dataproc cluster to finish the job
D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a region
bucket and use a Cloud Dataproc cluster to finish the job
Answer: D
Explanation:
D is the correct answer. A regional bucket is required, since a multi-regional bucket would incur additional cost to transfer the data to a centralized location.
Question: 244 CertyIQ
TerramEarth has equipped all connected trucks with servers and sensors to collect telemetry data. Next year they
want to use the data to train machine learning models. They want to store this data in the cloud while reducing
costs.
What should they do?
A. Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage
(GCS) Nearline bucket
B. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in
Google BigQuery
C. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in
Cloud Bigtable
D. Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket
Answer: D
Explanation:
Coldline Storage is the best choice for data that you plan to access at most once a year, due to its slightly lower availability, 90-day minimum storage duration, costs for data access, and higher per-operation costs. For example:
Cold Data Storage - Infrequently accessed data, such as data stored for legal or regulatory reasons, can be
stored at low cost as Coldline Storage, and be available when you need it.
Disaster recovery - In the event of a disaster recovery event, recovery time is key. Cloud Storage provides low
latency access to data stored as Coldline Storage.
Reference:
https://cloud.google.com/storage/docs/storage-classes
A. Treat every micro service call between modules on the vehicle as untrusted.
B. Require IPv6 for connectivity to ensure a secure address space.
C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
D. Use a functional programming language to isolate code execution cycles.
E. Use multiple connectivity subsystems for redundancy.
F. Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.
Answer: AC
Explanation:
A. Treat every micro service call between modules on the vehicle as untrusted.
C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
A. Have you engineers inspect the data for patterns, and then create an algorithm with rules that make
operational adjustments automatically
B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to
make operational adjustments automatically
C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging
(GCM) to make operational adjustments automatically
D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google
Cloud Machine Learning (ML) Platform to make operational adjustments automatically
Answer: B
Explanation:
B is correct. Only 200K vehicles are connected, so the models need to run locally to make operational adjustments automatically.
A. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud
Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
B. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud
Storage, use gsutil to create a SetStorageClass to NONE action when with an Age condition of 36 months.
C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36
months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age
condition of 36 months.
D. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36
months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36
months.
Answer: C
Explanation:
Enable a bucket lifecycle management rule to delete objects older than 36 months, and use partitioned tables in BigQuery with the partition expiration period set to 36 months. That is the right answer.
When you create a table partitioned by ingestion time, BigQuery automatically loads data into daily, date-
based partitions that reflect the data's ingestion or arrival time.
And Google recommends you configure the default table expiration for your datasets, configure the
expiration time for your tables, and configure the partition expiration for partitioned tables.
If the partitioned table has a table expiration configured, all the partitions in it are deleted according to the
table expiration settings. For our specific requirement, we could set the partition expiration to 36 months so
that partitions older than 36 months (and the data within) are automatically deleted.
Reference:
https://cloud.google.com/bigquery/docs/partitioned-tables#ingestion_time
https://cloud.google.com/bigquery/docs/managing-partitioned-tables#partition-expiration
https://cloud.google.com/bigquery/docs/best-practices-storage#use_the_expiration_settings_to_remove_unneeded_tables_and_partitions
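A hedged sketch of setting a 36-month partition expiration (36 months is roughly 1095 days, about 94,608,000 seconds); the dataset, table, and schema file are hypothetical:
bq mk --table \
    --time_partitioning_type=DAY \
    --time_partitioning_expiration=94608000 \
    eu_dataset.telemetry ./schema.json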
A. Create a Cloud Storage lifecycle rule with Age: 30, Storage Class: Standard, and Action: Set to Coldline, and
create a second GCS life-cycle rule with Age: 365, Storage Class: Coldline, and Action: Delete.
B. Create a Cloud Storage lifecycle rule with Age: 30, Storage Class: Coldline, and Action: Set to Nearline, and
create a second GCS life-cycle rule with Age: 91, Storage Class: Coldline, and Action: Set to Nearline.
C. Create a Cloud Storage lifecycle rule with Age: 90, Storage Class: Standard, and Action: Set to Nearline, and
create a second GCS life-cycle rule with Age: 91, Storage Class: Nearline, and Action: Set to Coldline.
D. Create a Cloud Storage lifecycle rule with Age: 30, Storage Class: Standard, and Action: Set to Coldline, and
create a second GCS life-cycle rule with Age: 365, Storage Class: Nearline, and Action: Delete.
Answer: A
Explanation:
A. Create a Cloud Storage lifecycle rule with Age: 30, Storage Class: Standard, and Action: Set to Coldline,
and create a second GCS life-cycle rule with Age: 365, Storage Class: Coldline, and Action: Delete.
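The same policy as answer A expressed as a lifecycle configuration (a sketch; the bucket name is hypothetical):
cat > lifecycle.json <<EOF
{"rule": [
  {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
   "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}},
  {"action": {"type": "Delete"},
   "condition": {"age": 365, "matchesStorageClass": ["COLDLINE"]}}
]}
EOF
gsutil lifecycle set lifecycle.json gs://my-archive-bucket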
A. Replace the existing data warehouse with BigQuery. Use table partitioning.
B. Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.
C. Replace the existing data warehouse with BigQuery. Use federated data sources.
D. Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional
Compute Engine preemptible instance with 32 CPUs.
Answer: A
Explanation:
A is the correct answer because the question was asking for a reliable way of improving the data warehouse.
The reliable way is to have a table partitioned and that can be well managed.
https://cloud.google.com/solutions/bigquery-data-warehouse
BigQuery supports partitioning tables by date. You enable partitioning during the table-creation process.
BigQuery creates new date-based partitions automatically, with no need for additional maintenance. In
addition, you can specify an expiration time for data in the partitions.
https://cloud.google.com/solutions/bigquery-data-warehouse#partitioning_tables
You can run queries on data that exists outside of BigQuery by using federated data sources, but this
approach has performance implications. Use federated data sources only if the data must be maintained
externally. You can also use query federation to perform ETL from an external source to BigQuery. This
approach allows you to define ETL using familiar SQL syntax.
https://cloud.google.com/solutions/bigquery-data-warehouse#external_sources
A. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud
Dataflow pipeline.
B. Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a
Compute Engine instance.
C. Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result
to a new table.
D. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.
Answer: D
Explanation:
A. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery
using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
B. Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-
Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.
C. Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket.
Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.
D. Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig
scripts to analyze data.
Answer: A
Explanation:
A. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery
using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
Answer: B
Explanation:
A. Open Buckets
B. Temporary Resources
C. Signed URLs
D. Temporary URLs
Answer: C
Explanation:
Signed URLs provide a way to give time-limited read or write access to anyone in possession of the URL,
regardless of whether they have a Google account
In some scenarios, you might not want to require your users to have a Google account in order to access Cloud
Storage, but you still want to control access using your application-specific logic. The typical way to address
this use case is to provide a signed URL to a user, which gives the user read, write, or delete access to that
resource for a limited time. Anyone who knows the URL can access the resource until the URL expires. You
specify the expiration time in the query string to be signed.
Reference: https://cloud.google.com/storage/docs/access-control/signed-urls
A. Cloud Datastore
B. Cloud SQL
C. All of the given options
D. Cloud Storage
Answer: A
Explanation:
Google Cloud Datastore is a NoSQL document database built for automatic scaling, high performance, and ease of application development.
Reference: https://cloud.google.com/appengine/docs/python/datastore/
A. Blue Whale
B. LXC
C. BSD Jails
D. Docker
Answer: D
Explanation:
Google Container Engine is a powerful cluster manager and orchestration system for running your Docker
containers.
Reference: https://cloud.google.com/container-engine/
Answer: B
Explanation:
Prior to Cloud IAM, you could only grant Owner, Editor, or Viewer roles to users. A wide range of services and
resources now surface additional IAM roles out of the box. For example, the Cloud
Pub/Sub service exposes Publisher and Subscriber roles in addition to the Owner, Editor, and Viewer roles.
There are two kinds of roles in Cloud IAM:
Primitive roles: The roles historically available in the Google Cloud Platform Console will continue to work.
These are the Owner, Editor, and Viewer roles.
Predefined roles: Predefined roles are the new IAM roles that give finer-grained access control than the
primitive roles. For example, the curated role Publisher provides access to only publish messages to a
Pub/Sub topic.
Reference: https://cloud.google.com/iam/docs/overview
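For example (hedged; the project ID and user are placeholders), granting the predefined Pub/Sub Publisher role is a single binding:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:[email protected]" \
    --role="roles/pubsub.publisher"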
Question: 257 CertyIQ
Which of the follow products will allow you to host a static website?
A. Cloud SDK
B. Cloud Endpoints
C. Cloud Storage
D. Cloud Datastore
Answer: C
Explanation:
Cloud Storage will allow you to host a static website. It provides the means to set an index page and a 404 page. The site can be served at very fast speeds and at low cost.
Reference: https://cloud.google.com/storage/docs/static-website
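A hedged sketch of hosting a static site from a bucket; the bucket/domain name, location, and file names are placeholders:
gsutil mb -l us-central1 gs://www.example.com
gsutil cp -r ./site/* gs://www.example.com
gsutil web set -m index.html -e 404.html gs://www.example.com
gsutil iam ch allUsers:objectViewer gs://www.example.com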
A. Swarm
B. Kubernetes
C. Docker Orchastrate
D. Mesos
Answer: B
Explanation:
Google Container Engine is a powerful cluster manager and orchestration system for running your Docker
containers. Container Engine schedules your containers into the cluster and manages them automatically
based on requirements you define (such as CPU and memory). It's built on the open source
Kubernetes system, giving you the flexibility to take advantage of on-premises, hybrid, or public cloud
infrastructure.
Reference: https://cloud.google.com/container-engine/
A. Git
B. RCS
C. SVN
D. Mercurial
Answer: A
Explanation:
Google Cloud Source Repositories are fully-featured, private Git repositories hosted on Google Cloud
Platform.
Reference: https://cloud.google.com/source-repositories/docs/
Question: 260 CertyIQ
Which of the following is an analytics data warehouse?
A. Cloud SQL
B. Big Query
C. Datastore
D. Cloud Storage
Answer: B
Explanation:
BigQuery is Google's fully managed, petabyte scale, low cost analytics data warehouse.
BigQuery is serverless, there is no infrastructure to manage and you don't need a database administrator, so
you can focus on analyzing data to find meaningful insights, use familiar SQL, and take advantage of our pay-
as-you-go model. BigQuery is a powerful Big Data analytics platform used by all types of organizations, from
startups to Fortune 500 companies.
Reference: https://cloud.google.com/bigquery/
Answer: D
Explanation:
Google Compute Engine delivers virtual machines running in Google's innovative data centers and worldwide
fiber network. Compute Engine's tooling and workflow support enable scaling from single instances to global,
load-balanced cloud computing.
Compute Engine's VMs boot quickly, come with persistent disk storage, and deliver consistent performance.
Our virtual servers are available in many configurations including predefined sizes or the option to create
Custom Machine Types optimized for your specific needs. Flexible pricing and automatic sustained use
discounts make Compute Engine the leader in price/performance.
Reference: https://cloud.google.com/compute/
Answer: B
Explanation:
At some point in time, you will experience an unexpected single-instance reboot. Unlike unexpected single-instance failures, your instance fails and is then automatically rebooted by the Google Compute Engine service. To help mitigate these events, back up your data, use persistent disks, and use startup scripts to quickly re-configure software.
Reference: https://cloud.google.com/compute/docs/tutorials/robustsystems
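Hedged examples of those mitigations; the instance, disk, zone, and script names are placeholders:
gcloud compute instances create worker-1 \
    --zone=us-central1-a \
    --metadata-from-file=startup-script=./configure.sh
gcloud compute disks snapshot data-disk \
    --zone=us-central1-a --snapshot-names=data-disk-backup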
A. SAML2
B. JWT
C. Service accounts
D. JSON
Answer: A
Explanation:
SSO is available for G Suite Basic, G Suite Business, and G Suite for Education. It enables users to access all of their enterprise cloud applications, including administrators signing in to the Admin console, by signing in one time for all services. If a user tries to sign in to the Admin console or another Google service when SSO is set up, they are redirected to the SSO sign-in page.
Google provides a Security Assertion Markup Language (SAML)-based SSO API that you can use to integrate
into your Lightweight Directory Access Protocol
(LDAP), or other SSO system. LDAP is a networking protocol for querying and modifying directory services
running over TCP/IP.
Reference: https://support.google.com/a/answer/60224?hl=en
Answer: A
Explanation:
With Google Cloud Directory Sync (GCDS), you can synchronize the data in your Google domain with your Microsoft Active Directory or LDAP server. Your Google users, groups, and shared contacts are synchronized to match the information in your LDAP server. The data in your LDAP directory server is never modified or compromised. GCDS is a secure tool that helps you easily keep track of users and groups.
Reference: https://support.google.com/a/answer/106368?hl=en
A. Signed URLs
B. gsutil
C. Single sign-on
D. Temporary Storage Accounts
Answer: A
Explanation:
Signed URLs are a mechanism for query string authentication for buckets and objects. Signed URLs provide a
way to give time-limited read or write access to anyone in possession of the URL, regardless of whether they
have a Google account.
Reference: https://cloud.google.com/storage/docs/access-control/signed-urls
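As an illustration of the concept, the sketch below uses the google-cloud-storage Python client to generate a V4 signed URL. The bucket name, object name, and 15-minute expiration are placeholder assumptions, and the code must run with credentials that are able to sign (for example, a service account key).

# Hypothetical sketch: generate a time-limited signed URL for one object.
from datetime import timedelta
from google.cloud import storage

client = storage.Client()                      # uses Application Default Credentials
bucket = client.bucket("example-bucket")       # placeholder bucket name
blob = bucket.blob("reports/report.pdf")       # placeholder object name

url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),          # access expires after 15 minutes
    method="GET",                              # read-only access
)
print(url)  # anyone holding this URL can read the object until it expires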
A. A preemptible VM
B. A shared-core VM
C. A high-cpu VM
D. A standard VM
Answer: A
Explanation:
A preemptible VM is an instance that you can create and run at a much lower price than normal instances.
However, Compute Engine might terminate (preempt) these instances if it requires access to those resources
for other tasks. Preemptible instances are excess Compute Engine capacity so their availability varies with
usage.
If your applications are fault-tolerant and can withstand possible instance preemptions, then preemptible
instances can reduce your Compute Engine costs significantly. For example, batch processing jobs can run on
preemptible instances. If some of those instances terminate during processing, the job slows but does not
completely stop. Preemptible instances complete your batch processing tasks without placing additional
workload on your existing instances, and without requiring you to pay full price for additional normal
instances.
Reference: https://cloud.google.com/compute/docs/instances/preemptible
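For illustration only, here is a minimal sketch using the google-cloud-compute Python client to request a preemptible instance. The project, zone, machine type, and image family are placeholder assumptions, and the field names follow the compute_v1 API as I understand it, not the exam material.

# Hypothetical sketch: create a preemptible VM for fault-tolerant batch work.
from google.cloud import compute_v1

project, zone = "my-project", "us-central1-a"          # placeholders

instance = compute_v1.Instance(
    name="batch-worker-1",
    machine_type=f"zones/{zone}/machineTypes/e2-standard-2",
    scheduling=compute_v1.Scheduling(
        preemptible=True,          # Compute Engine may reclaim this VM at any time
        automatic_restart=False,   # preemptible VMs are not automatically restarted
    ),
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12",
            ),
        )
    ],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)

operation = compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
operation.result()  # wait for the create operation to finish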
A. A managed instance group combines existing instances of different configurations into one manageable
group
B. A managed instance group uses an instance template to create identical instances
C. A managed instance group creates a firewall around instances
D. A managed instance group is a set of servers used exclusively for batch processing
Answer: B
Explanation:
A managed instance group uses an instance template to create identical instances. You control a managed
instance group as a single entity. If you wanted to make changes to instances that are part of a managed
instance group, you would apply the change to the whole instance group.
Reference: https://cloud.google.com/compute/docs/instance-groups/
A. deny
B. allow, deny & filtered
C. allow
D. allow & deny
Answer: D
Explanation:
You can create firewall rules to allow or deny specific connections based on a combination of IP addresses,
ports, and protocol.
Reference: https://cloud.google.com/compute/docs/networking
Answer: B
Explanation:
- Legacy (non-subnetwork) mode is the original approach for networks, where IP address allocation occurs at
the global network level. This means the network address space spans across all regions. You can still create
a legacy network, but subnetworks are the preferred approach and default behavior going forward.
- Subnet mode is the new form of networks in which your network is subdivided into regional subnetworks.
Each subnetwork controls the IP address range used for instances that are allocated to that subnetwork. The
IP ranges of the different subnetworks in a network might be non-contiguous. There are two options for using
subnetworks:
  - Auto subnet network automatically assigns a subnetwork IP prefix range to each region in your network. The
instances created in a zone in a specific region in your network get assigned an IP allocated from the regional
subnetwork range. The default network for a new project is an auto subnet network.
  - Custom subnet network allows you to manually define subnetwork prefixes for each region in your network.
There can be zero, one, or several subnetwork prefixes created per region for a network. In order to create an
instance in a zone, you must have previously created at least one subnetwork in that region. At instance
creation time, you will need to specify the subnetwork in the region that the instance IP should be allocated
from.
Reference: https://cloud.google.com/compute/docs/subnetworks
Question: 270 CertyIQ
Which of the following is not a valid metric for triggering autoscaling?
Answer: D
Explanation:
To create an autoscaler, you must specify the autoscaling policy and a target utilization level that the
autoscaler uses to determine when to scale the group. You can choose to scale using the following policies:
- Average CPU utilization
- Stackdriver Monitoring metrics
- HTTP load balancing serving capacity, which can be based on either utilization or requests per second
- Google Cloud Pub/Sub queuing workload (Alpha)
Reference: https://cloud.google.com/compute/docs/autoscaler/
A. Service accounts
B. Tags
C. Metadata
D. Labels
Answer: B
Explanation:
Assign tags to help you easily apply networking or firewall settings. Tags are used by networks and firewalls
to identify which instances that certain firewall rules apply to. For example, if there are several instances that
perform the same task, such as serving a large website, you can tag these instances with a shared word or
term and then use that tag to give HTTP access to those instances. Tags are also reflected in the metadata
server, so you can use them for applications running on your instances.
Reference: https://cloud.google.com/compute/docs/label-or-tag-resources
A. Point-in-time recovery
B. The AlwaysOn setting
C. Snapshots
D. Failover replicas
Answer: D
Explanation:
When you create a Second Generation instance, you can configure it for high availability; Cloud SQL creates
the failover replica at the same time that it creates the master.
Reference: https://cloud.google.com/sql/docs/configure-ha#test
A. ubuntu
B. The Google provided "gceinstance" user
C. Whatever user you specify in the console
D. root
Answer: D
Explanation:
The instance always executes startup scripts as root, and only executes those scripts after it creates any new
users whose SSH keys are included in the instance metadata.
Reference: https://cloud.google.com/compute/docs/startupscript
A. When an instance shuts down through a request to the guest operating system
B. A preemptible instance being terminated
C. An instances.reset API call
D. Shutting down via the cloud console
Answer: C
Explanation:
Shutdown scripts execute when an instance is scheduled to restart or terminate. There are many ways to
restart or terminate an instance, but only some actions trigger the shutdown script to run. A shutdown script
runs as part of the following actions:
When an instance shuts down due to an instances.delete request or an instances.stop request to the API.
When Compute Engine stops a preemptible instance as part of the preemption process.
When an instance shuts down through a request to the guest operating system, such as sudo shutdown or
sudo reboot.
When you shut down an instance manually through the Cloud Platform Console or the gcloud compute tool.
The shutdown script will not run if the instance is reset using instances().reset.
Reference: https://cloud.google.com/compute/docs/shutdownscript
A. Google group
B. Service account
C. Code account
D. Google account
Answer: B
Explanation:
A service account is an account that belongs to your application instead of to an individual end user. When you
run code that is hosted on Cloud Platform, you specify the account that the code should run as. You can
create as many service accounts as needed to represent the different logical components of your application.
Reference: https://cloud.google.com/iam/docs/overview
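As a small illustration (not taken from the exam material), the snippet below loads service-account credentials from a JSON key file with the google-auth Python library; the key file path and scope are placeholder assumptions.

# Hypothetical sketch: run client code as a specific service account.
from google.oauth2 import service_account
from google.cloud import storage

credentials = service_account.Credentials.from_service_account_file(
    "sa-key.json",                                             # placeholder key file
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

# Any client built with these credentials acts as the service account,
# not as an individual end user.
client = storage.Client(credentials=credentials, project=credentials.project_id)
print([b.name for b in client.list_buckets()])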
Answer: A
Explanation:
- Treat each component of your application as a separate trust boundary. If you have multiple services that
require different permissions, create a separate service account for each of the services so that they can be
permissioned differently.
- Grant primitive roles in the following cases:
  - When the Cloud Platform service does not provide a predefined role. See the predefined roles table for a list
of all available predefined roles.
  - When you want to grant broader permissions for a project. This often happens when you're granting
permissions in development or test environments.
  - When you need to allow a member to modify permissions for a project, you'll want to grant them the owner
role because only owners have the permission to grant access to other users for projects.
  - When you work in a small team where the team members don't need granular permissions.
- Remember that a policy set on a child resource cannot restrict access granted on its parent.
Check the policy granted on every resource and make sure you understand the hierarchical inheritance.
- Grant roles at the smallest scope needed. For example, if a user only needs to publish to a Pub/Sub topic,
grant the Publisher role to the user for that topic.
- Restrict who can act as service accounts. Users who are granted the Service Account Actor role for a service
account can access all the resources for which the service account has access. Therefore be cautious when
granting the Service Account Actor role to a user.
Restrict who has access to create and manage service accounts in your project.
- Granting the owner role to a member will allow them to modify the IAM policy. Therefore grant the owner role
only if the member has a legitimate purpose to manage the IAM policy. This is because your policy contains
sensitive access control data, and having a minimal set of users manage it will simplify any auditing that you
may have to do.
Reference: https://cloud.google.com/iam/docs/using-iam-securely
Answer: D
Explanation:
An active/active hybrid production environment (on-premises and GCP) can continue running in the event that
either the on-premises environment or the GCP deployment fails, so its recovery time would be zero. A warm
standby server requires a manual DNS adjustment, so it will always take some time to recover.
Making it easier to do the DNS adjustment will reduce the recovery time for the warm standby model, though.
A hot standby server automatically fails over in the event that the main instance becomes unhealthy, so it has
a lower recovery time than a warm standby server, which requires a manual failover.
Typically, the smaller your RTO (Recovery Time Objective) is, the more preconfigured you will want your image
to be.
Reference: https://cloud.google.com/solutions/disaster-recovery-cookbook
Answer: A
Explanation:
These are all best practices for mitigating Denial of Service attacks:
Reduce the attack surface for your GCE deployment
A. Local SSD
B. Standard Persistent Disk
C. SSD Persistent Disk
D. RAM disk
Answer: C
Explanation:
Local SSDs and RAM disks disappear when you stop an instance. Standard Persistent Disks and SSD
Persistent Disks both survive when you stop an instance, but SSD Persistent Disks have up to 4 times the
throughput and up to 40 times the I/O operations per second of a Standard Persistent Disk.
Reference: https://cloud.google.com/compute/docs/disks/
A. You can migrate your existing Microsoft application licenses to Compute Engine instances, but not your
Microsoft Windows licenses.
B. You can migrate your existing Microsoft Windows and Microsoft application licenses to Compute Engine
instances.
C. You cannot migrate your existing Microsoft Windows or Microsoft application licenses to Compute Engine
instances.
D. You can migrate your existing Microsoft Windows licenses to Compute Engine instances, but not your
Microsoft application licenses.
Answer: B
Explanation:
Answer is B.
You can bring your existing Windows Server licenses to Compute Engine using Bring your own license with
sole-tenant nodes or bring your existing Microsoft application licenses to your Windows Server instances to
run specific applications. However, you must continue to manage those licenses yourself.
Reference:
https://cloud.google.com/compute/docs/instances/windows/
Answer: B
Explanation:
Cloud SQL is a managed service for MySQL and PostgreSQL, which both support SQL queries. Cloud Spanner
supports SQL queries. Cloud Bigtable and Cloud
Datastore are NoSQL databases.
Reference: https://cloud.google.com/products/storage/
Question: 282 CertyIQ
Which statement about IP addresses is false?
A. You are charged for a static external IP address for every hour it is in use.
B. You are not charged for ephemeral IP addresses.
C. Google Cloud Engine supports only IPv4 addresses, not IPv6.
D. You are charged for a static external IP address when it is assigned but unused.
Answer: B
Explanation:
Answer is B
https://cloud.google.com/compute/all-pricing#ipaddress
A. Container Engine
B. Cloud Engine
C. App Engine
D. Docker containers running on Cloud Engine
Answer: C
Explanation:
App Engine is great for running web-based apps, line of business apps, and mobile backends. Compute Engine
is great for when you need more control of the underlying infrastructure.
Container Engine is in between because it gives you control of the containers running on top of Compute
Engine.
Reference:
https://cloud.google.com/compute/docs/faq#how_do_google_app_engine_and_product_name_relate_to_each_other
Question: 284 CertyIQ
To ensure that your application will handle the load even if an entire zone fails, what should you do?
A. Don't select the "Multizone" option when creating your managed instance group.
B. Spread your managed instance group over two zones and overprovision by 100%.
C. Create a regional unmanaged instance group and spread your instances across multiple zones.
D. Overprovision your regional managed instance group by at least 50%.
Answer: D
Explanation:
To account for the extreme case where one zone fails or an entire group of instances stops responding,
Compute Engine strongly recommends overprovisioning your managed instance group by at least 50%.
Spreading instances across three zones already helps you preserve at least 2/3 of your serving capacity and
the other two zones in the region can continue to serve traffic without interruption. By overprovisioning to
150%, you can ensure that if 1/3 of the capacity is lost, 100% of traffic is supported by the remaining zones.
You need to select the "Multizone" option (or the --region flag if you're using the gcloud command) when
creating a managed instance group.
It is only possible to create regional managed instance groups. You cannot create regional unmanaged
instance groups.
Reference:
https://cloud.google.com/compute/docs/instance-groups/distributing-instances-with-regional-instance-groups#provisioning_the_correct_managed_instance_group_size
A. Bob will be able to access all of the objects inside the bucket because he was granted access to at least one
object in the bucket.
B. Bob will be able to access the object because bucket and object ACLs are independent of each other.
C. Bob will not be able to access the object because he does not have access to the bucket.
D. It is not possible to grant access to an object when it is inside a bucket for which a user does not have access.
Answer: B
Explanation:
Bucket and object ACLs are independent of each other, which means that the ACLs on a bucket do not affect
the ACLs on objects inside that bucket. It is possible for a user without permissions for a bucket to have
permissions for an object inside the bucket. For example, you can create a bucket such that only GroupA is
granted permission to list the objects in the bucket, but then upload an object into that bucket that allows
GroupB READ access to the object. GroupB will be able to read the object, but will not be able to view the
contents of the bucket or perform bucket-related tasks.
Reference: https://cloud.google.com/storage/docs/best-practices#security
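To make the independence of bucket and object ACLs concrete, here is a hedged sketch with the google-cloud-storage Python client that grants READ on a single object to a group without touching the bucket ACL. The bucket, object, and group address are placeholders, and the sketch assumes the bucket uses fine-grained ACLs rather than uniform bucket-level access.

# Hypothetical sketch: give GroupB read access to one object only.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")        # GroupB has no bucket-level access
blob = bucket.blob("uploads/photo.jpg")         # placeholder object

blob.acl.group("group-b@example.com").grant_read()  # object-level grant only
blob.acl.save()                                     # persist the object ACL

# GroupB can now download this object but still cannot list the bucket.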
Answer: B
Explanation:
VPC networks allow you to regionally segment the network IP space into prefixes (subnets) and control which
prefix a VM instance's internal IP address is allocated from. If you want to avoid statically managing these
subnets including the burden of adding and removing related static routes for your VPN, you can do so by
enabling dynamic routing for your VPNs using Cloud Router.
The diagram at https://cloud.google.com/compute/images/cloudrouter/cr-w-subnets.svg shows a VPN
Gateway, a Peer Gateway, and a Cloud Router.
Reference:
https://cloud.google.com/compute/docs/cloudrouter#cloud_router_for_vpns_with_vpc_networks
Answer: C
Explanation:
There are 3 ways to manage your own encryption keys when using Google Cloud:
- Customer-managed encryption keys (CMEK) using Cloud KMS allow you to manage your own keys that are
hosted on GCP.
- Customer-supplied encryption keys (CSEK) allow you to manage your own keys on premises, but still use
them on GCP.
- With client-side encryption, you encrypt the data before you send it to GCP.
Google Cloud Platform encrypts customer data stored at rest by default, with no additional action required
from you.
Data in Google Cloud Platform is broken into subfile chunks for storage, and each chunk is encrypted at the
storage level with an individual encryption key. The key used to encrypt the data in a chunk is called a data
encryption key (DEK). Because of the high volume of keys at Google, and the need for low latency and high
availability, these keys are stored near the data that they encrypt. The DEKs are encrypted with (or "wrapped"
by) a key encryption key (KEK).
Customers can choose which key management solution they prefer for managing the KEKs that protect the
DEKs that protect their data.
Reference: https://cloud.google.com/security/encryption-at-rest/
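As an optional illustration, the sketch below shows the two "bring your own key" options on Cloud Storage with the Python client: writing one object protected by a Cloud KMS key (CMEK) and one protected by a customer-supplied key (CSEK). The bucket, object names, key resource name, and raw key bytes are placeholder assumptions.

# Hypothetical sketch: CMEK and CSEK uploads to Cloud Storage.
import os
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")

# CMEK: Cloud Storage encrypts with a key you manage in Cloud KMS.
kms_key = "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
cmek_blob = bucket.blob("cmek-object.txt", kms_key_name=kms_key)
cmek_blob.upload_from_string("protected by a customer-managed key")

# CSEK: you supply the AES-256 key yourself; Google never stores it.
raw_key = os.urandom(32)  # keep this key safe; losing it means losing the data
csek_blob = bucket.blob("csek-object.txt", encryption_key=raw_key)
csek_blob.upload_from_string("protected by a customer-supplied key")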
Question: 288 CertyIQ
Which database service requires that you configure a failover replica to make it highly available?
A. Cloud Spanner
B. Cloud SQL
C. BigQuery
D. Cloud Datastore
Answer: B
Explanation:
Cloud Datastore, Cloud Spanner, and BigQuery are all horizontally scalable and are automatically replicated
to multiple zones. Since Cloud SQL is not horizontally scalable, you must configure a failover replica to make
it highly available.
Reference: https://cloud.google.com/sql/docs/mysql/configure-ha
Answer: C
Explanation:
Predefined roles provide more granular access than the primitive roles. Grant predefined roles to identities
when possible, so you only give the least amount of access necessary to access your resources.
Reference: https://cloud.google.com/iam/docs/using-iam-securely
Answer: C
Explanation:
Do not embed secrets related to authentication in source code, such as API keys, OAuth tokens, and service
account credentials.
Authenticating applications using service account credentials
Client libraries can use Application Default Credentials to authenticate with Google APIs and send requests to
those APIs.
For some applications, you might need to request an OAuth2 access token and use it directly without going
through a client library or using the gcloud or gsutil tools.
Some applications might use commands from the gcloud and gsutil tools, which are included by default in
most Compute Engine images. These tools automatically recognize an instance's service account and relevant
permissions granted to the service account.
Reference: https://cloud.google.com/docs/authentication#token_lifecycle_management
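As a brief illustration, Application Default Credentials can be picked up in Python with google.auth; the scope and API call below are placeholder assumptions, and the code assumes it runs on a Compute Engine instance or with GOOGLE_APPLICATION_CREDENTIALS set.

# Hypothetical sketch: let the environment supply credentials instead of
# embedding secrets in source code.
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# AuthorizedSession attaches and refreshes the access token automatically.
session = AuthorizedSession(credentials)
resp = session.get(f"https://storage.googleapis.com/storage/v1/b?project={project_id}")
print(resp.status_code)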
Answer: D
Explanation:
Google uses software-defined networking that enables you to subject every packet to security checks,
thereby enabling complete isolation of Cloud Platform projects.
Networks within projects are used to isolate groups of VM instances.
Subnetworks on Compute Engine enable you to control the address space in which VM instances are created,
while maintaining the ability to route between them.
Firewall rules only restrict incoming network traffic. They cannot restrict outgoing network traffic.
Reference:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#use_projects_to_fully_isolate_resources
A. Create a snapshot of the disk and use it to create a new disk; then attach the new disk to a new instance
B. Use netcat to try to connect to port 22
C. Access the serial console output
D. Create a startup script to collect information.
Answer: ABC
Explanation:
If you modify your firewall rules, it could prevent traffic from flowing to your VM.
You can't attach the VM instance's disk to a new instance without detaching it from the existing instance first,
which would require shutting down the instance.
If you create a startup script to collect information, you will have to restart your VM instance to run the script.
You can view the serial console output without disrupting traffic. Running commands in the interactive serial
console, on the other hand, could disrupt operations.
Connecting to port 22 using netcat will only affect the SSH server on the VM, since that is the only service
that listens on port 22.
You can create a snapshot of a VM instance's disk without disrupting the VM's operation.
Reference:
https://cloud.google.com/compute/docs/troubleshooting#ssherrors
Answer: BD
Explanation:
Uptime checks verify that your web server is always accessible. The alerting policy controls who is notified if
the uptime checks should fail.
You don't need to install the Stackdriver Monitoring Agent to get downtime alerts. The agent provides
additional information, but it's not required. The Stackdriver
Logging Agent is for additional logging, not for alerts.
Using the Monitoring agent is optional. Stackdriver Monitoring can access some metrics without the
Monitoring agent, including CPU utilization, some disk traffic metrics, network traffic, and uptime information.
[https://cloud.google.com/monitoring/agent/#purpose]
Reference: https://cloud.google.com/monitoring/quickstart-lamp#gs-checks
Answer: AD
Explanation:
Cloud Storage Transfer Service transfers data from an online data source to a data sink. Your data source can
be an Amazon Simple Storage Service (Amazon
S3) bucket, an HTTP/HTTPS location, or a Google Cloud Storage bucket. Your data sink (the destination) is
always a Google
Cloud Storage bucket.
You can use Cloud Storage Transfer Service to:
Back up data to a Google Cloud Storage bucket from other storage providers.
Move data from a Multi-Regional Storage bucket to a Nearline Storage bucket to lower your storage costs.
Reference: https://cloud.google.com/storage/transfer/
Question: 295 CertyIQ
What are two of the actions you can take to troubleshoot a virtual machine instance that won't start up at all?
(Select 2 answers.)
A. Increase the CPU and memory on the instance by changing the machine type.
B. Validate that your disk has a valid file system.
C. Examine your virtual machine instance's serial port output.
D. Connect to your virtual machine instance using SSH.
Answer: BC
Explanation:
Here are some tips to help troubleshoot your persistent boot disk if it doesn't boot.
Examine your virtual machine instance's serial port output.
An instance's BIOS, bootloader, and kernel will print their debug messages into the instance's serial port
output, providing valuable information about any errors or issues that the instance experienced.
Enable interactive access to the serial console.
You can enable interactive access to an instance's serial console so you can log in and debug boot issues from
within the instance, without requiring your instance to be fully booted.
Validate that your disk has a valid file system.
If your file system is corrupted or otherwise invalid, you won't be able to launch your instance.
Validate that the disk has a valid master boot record (MBR).
If your virtual machine won't boot at all, then you can't use SSH to connect to it because the SSH server on
the VM won't be running.
Increasing the CPU and memory on the instance might help if the VM boots partway, but not if it can't boot at
all.
Reference: https://cloud.google.com/compute/docs/troubleshooting#pdboot
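A short, hedged sketch of reading the serial port output programmatically with the google-cloud-compute Python client; the project, zone, and instance name are placeholders.

# Hypothetical sketch: inspect boot logs of a VM that won't start.
from google.cloud import compute_v1

output = compute_v1.InstancesClient().get_serial_port_output(
    project="my-project",        # placeholder project ID
    zone="us-central1-a",        # placeholder zone
    instance="broken-vm",        # placeholder instance name
)
# BIOS, bootloader, and kernel messages end up here.
print(output.contents)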
A. You should test at the maximum load that you expect to encounter.
B. You should test at 50% more than the maximum load that you expect to encounter.
C. It is not necessary to test sudden increases in traffic since GCP scales seamlessly.
D. Your load tests should include testing sudden increases in traffic.
Answer: AD
Explanation:
Your tests should be designed to simulate real world traffic as closely as possible. You should test at the
maximum load that you expect to encounter.
In addition, some applications will get a sudden increase in load, and you will need to predict the rate of
increase. If you are expecting spikey load, you should also test how your application performs when traffic
suddenly increases.
Although GCP services scale quickly, they do not scale instantaneously, which is why you should test sudden
increases in traffic.
Reference: https://cloud.google.com/appengine/articles/scalability#loadtesting
A. In a resilience test, your application should keep running with little or no downtime.
B. To test the resilience of an autoscaling instance group, you can terminate a random instance within that
group.
C. In order for an application to survive instance failures, it should not be stateless.
D. Resilience testing is the same as disaster recovery testing.
Answer: AB
Explanation:
Resilience testing is similar to disaster recovery testing because you're testing what happens when
infrastructure fails, but the difference is that in resilience testing, you're expecting your application to keep
running, with little or no downtime. With disaster recovery testing, some downtime is expected.
One common testing scenario is to terminate a random instance within an autoscaling instance group. Netflix
created software called Chaos Monkey that automates this sort of testing. If your application in the
autoscaling instance group is stateless, then it should be able to survive this sort of failure without any
noticeable impact on users.
Reference: https://cloudacademy.com/google/managing-your-google-cloud-infrastructure-course/testing.html
Answer: C
Explanation:
Stackdriver Error Reporting keeps track of errors in your applications and can be configured to alert you when
an error occurs.
Stackdriver Debugger lets you inspect the state of an application at any code location. If you click on an error
displayed in Error Reporting, it will put you into the associated application's source code in the Debugger so
you can diagnose the problem.
Stackdriver Monitoring gives real-time updates on performance metrics and uptime, not application errors.
There is no service called Stackdriver Alerts, although alerting is a capability of Stackdriver Monitoring.
Stackdriver Trace collects latency data from your applications. It is useful for locating performance
bottlenecks, not application errors.
Reference: https://cloud.google.com/products/
Answer: C
Explanation:
When you create a sink, Stackdriver Logging creates a new service account for the sink, called a unique writer
identity.
In order to write logs to a BigQuery dataset, you must grant the sink's writer identity either Can edit
permission or the Writer role.
It is not necessary to create a firewall rule to allow traffic between Stackdriver and BigQuery.
The Cloud Data Transfer Service is for importing data to Google Cloud Platform from an external source.
BigQuery can easily handle any volume of Stackdriver logs.
Reference:
https://cloud.google.com/logging/docs/export/configure_export_v2#errors_exporting_to_bigquery
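For illustration, here is a hedged sketch with the google-cloud-logging Python client that creates a sink to a BigQuery dataset and prints the unique writer identity, which must then be granted write access on the dataset. The sink name, filter, project, and dataset are placeholder assumptions.

# Hypothetical sketch: export logs to BigQuery via a sink with a unique writer identity.
from google.cloud import logging

client = logging.Client()
sink = client.sink(
    "audit-to-bq",                                   # placeholder sink name
    filter_='resource.type="gce_instance"',          # placeholder filter
    destination="bigquery.googleapis.com/projects/my-project/datasets/audit_logs",
)
sink.create(unique_writer_identity=True)

# Grant this service account edit access (e.g. BigQuery Data Editor) on the
# dataset, otherwise the export will fail.
print(sink.writer_identity)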
Answer: C
Explanation:
C is correct. From the Google Cloud blog announcement of the Blue Medora offering: if you're using
Stackdriver to monitor your Google Cloud Platform (GCP) or Amazon Web Services (AWS) resources, you can
extend your observability to on-prem infrastructure, Microsoft Azure, databases, hardware devices and
more. The BindPlane integration from Blue Medora lets you consolidate all your signals into Stackdriver,
GCP's monitoring tool.
https://cloud.google.com/blog/products/management-tools/extending-stackdriver-to-on-prem-with-the-newbindplane-integration
A. Restrict usage of the owner role for projects and log buckets.
B. Require two people to inspect the logs.
C. Implement object versioning on the log-buckets.
D. Encrypt the logs using Cloud KMS.
Answer: ACD
Explanation:
A, C, and D are correct. B only adds a second person to inspect the logs; it monitors access rather than
restricting it. D is valid because the logs can be encrypted with Cloud KMS, and access to the key can be
limited to authorized members or service accounts through predefined roles such as
roles/cloudkms.cryptoKeyEncrypter and roles/cloudkms.cryptoKeyDecrypter.
Answer: D
Explanation:
Google Compute Engine (GCE) only allows network traffic that is explicitly permitted by your project's
firewall rules to reach your instance. By default, all projects automatically come with a default network that
allows certain kinds of connections. If you delete one of the default network firewall rules, then the
associated traffic will no longer be allowed.
Dropped traffic can be caused by the TCP keep-alive setting being too long, not by being too short.
All GCE instances have high-bandwidth connections.
Reference: https://cloud.google.com/compute/docs/troubleshooting#networktraffic
A. Penetration tests
B. Integrating static code analysis tools into your CI/CD pipeline
C. Encrypting your source code
D. Peer review of code
Answer: ABD
Explanation:
There are four basic techniques for analyzing the security of a software application - automated scanning,
manual penetration testing, static analysis, and manual code review.
Despite the many claims that code review is too expensive or time consuming, there is no question that it is
the fastest and most accurate way to find and diagnose many security problems. There are also dozens of
serious security problems that simply can't be found any other way.
Encrypting your source code might help with keeping it out of the hands of hackers, but it won't help you
develop more secure software.
Reference: https://www.owasp.org/images/2/2e/OWASP_Code_Review_Guide-V1_1.pdf
Answer: BC
Explanation:
If a Delete action is specified for a bucket with the Age condition (and no NumberOfNewerVersions condition),
then some objects may be tagged with expiration time metadata. An object's expiration time indicates the
time at which the object becomes (or became) eligible for deletion by Object Lifecycle Management. The
expiration time may change as the bucket's lifecycle configuration changes.
To find out what lifecycle management actions have been taken, you can enable access logs for your bucket.
A value of "GCS Lifecycle Management" in the
"cs_user_agent" field in the log entry indicates the action was taken by Google Cloud Storage based on the
lifecycle configuration.
A lifecycle config file is used to configure a lifecycle policy, but it does not contain information about how that
policy has affected specific objects.
There is no service called "Cloud Storage Lifecycle Monitoring".
Reference: https://cloud.google.com/storage/docs/lifecycle#expirationtime
A. Archive objects older than 30 days (the second rule doesn't do anything)
B. Delete objects older than 30 days (the second rule doesn't do anything)
C. Archive objects older than 30 days and move objects to Coldline Storage after 365 days
D. Delete objects older than 30 days and move objects to Coldline Storage after 365 days
Answer: C
Explanation:
If object versioning is enabled and the Delete rule has an "isLive:true" condition, then objects will be archived
rather than deleted. If object versioning is disabled, then the first rule would actually delete objects after 30
days and the second rule would never match any objects. The question says that object versioning is enabled,
so that's not the case.
The second rule moves objects older than 365 days from Multi-Regional Storage to Coldline Storage. Since
all live objects are archived after 30 days, only archived objects will be old enough to be moved by this rule.
Also, since this is a Multi-Regional bucket, this rule will match all archived objects older than 365 days (other
than those that have already been moved to Coldline Storage).
Reference: https://cloud.google.com/storage/docs/managing-lifecycles
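To show what such a policy looks like in code, here is a hedged sketch using the google-cloud-storage Python client. The bucket name is a placeholder, and the two rules mirror the ones described above (archive/delete live objects after 30 days, move objects to Coldline after 365 days).

# Hypothetical sketch: configure the two lifecycle rules described above.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-bucket")      # placeholder bucket

# Rule 1: "delete" live objects after 30 days; with object versioning enabled
# this archives the current version instead of removing it permanently.
bucket.add_lifecycle_delete_rule(age=30, is_live=True)

# Rule 2: move objects older than 365 days to Coldline Storage.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)

bucket.patch()  # push the updated lifecycle configuration to Cloud Storage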
A. Stackdriver Trace tracks the performance of the virtual machines running the application.
B. Stackdriver Trace tracks the latency of incoming requests.
C. Applications in App Engine automatically submit traces to Stackdriver Trace. Applications outside of App
Engine need to use the Trace SDK or Trace API.
D. To make an application work with Stackdriver Trace, you need to add instrumentation code using the Trace
SDK or Trace API, even if the application is in App Engine.
Answer: BC
Explanation:
By default, Stackdriver Trace collects data from any Google App Engine application where the feature is
enabled. For other applications, use the Stackdriver
Trace API [either directly or through the Trace SDK] to send latency data to Stackdriver Trace.
[https://cloud.google.com/trace/docs/reference]
Stackdriver Trace helps you understand how long it takes your application to handle incoming requests from
users or other applications, and how long it takes to complete operations like RPC calls performed when
handling the requests. [https://cloud.google.com/trace/docs/overview]
Reference: https://cloud.google.com/trace/docs/reference
A. Create a token and pass it in as an environment variable to func_display. When invoking func_query, include
the token in the request. Pass the same token to func_query and reject the invocation if the tokens are
different.
B. Make func_query 'Require authentication.' Create a unique service account and associate it to func_display.
Grant the service account invoker role for func_query. Create an id token in func_display and include the token
to the request when invoking func_query.
C. Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the
same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.
D. Create those two functions in the same project and VPC. Make func_query only accept internal traffic.
Create an ingress firewall for func_query to only allow traffic from func_display. Also, make sure both
functions use the same service account.
Answer: B
Explanation:
B is correct. For authenticated function-to-function calls, add the calling function's service account as a
member on the receiving function and grant that member the Cloud Functions Invoker role. The calling
function then includes a Google-signed ID token in its request.
https://cloud.google.com/functions/docs/securing/authenticating
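A minimal, hedged sketch of what func_display could do to call func_query with a Google-signed ID token, using the google-auth and requests Python libraries; the function URL and payload handling are placeholder assumptions.

# Hypothetical sketch: caller fetches an ID token for the receiving function's URL.
import requests
import google.auth.transport.requests
import google.oauth2.id_token

FUNC_QUERY_URL = "https://REGION-PROJECT.cloudfunctions.net/func_query"  # placeholder

def call_func_query(payload):
    auth_request = google.auth.transport.requests.Request()
    # Token is minted for func_display's service account with func_query as audience.
    token = google.oauth2.id_token.fetch_id_token(auth_request, FUNC_QUERY_URL)
    response = requests.post(
        FUNC_QUERY_URL,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    return response.json()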
Question: 308 CertyIQ
For this question, refer to the TerramEarth case study. You have broken down a legacy monolithic application into a
few containerized RESTful microservices.
You want to run those microservices on Cloud Run. You also want to make sure the services are highly available
with low latency to your customers. What should you do?
A. Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services.
Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.
B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups pointing to the
services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing
instance.
C. Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points
to the services.
D. Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud
Run Endpoints to its backend service.
Answer: B
Explanation:
A. Open a support case regarding the CVE and chat with the support engineer.
B. Read the CVEs from the Google Cloud Status Dashboard to understand the impact.
C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.
D. Post a question regarding the CVE in Stack Overflow to get an explanation.
E. Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.
Answer: AC
Explanation:
A. Open a support case regarding the CVE and chat with the support engineer.
C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.
Answer: C
Explanation:
Use a Cloud Monitoring uptime check to validate the application URL, and use Pub/Sub to trigger a Cloud
Function that switches the URL if the check fails.
https://cloud.google.com/monitoring/uptime-checks?hl=en
A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for
each microservice, and tag them using the code commit hash. Push the images to the Container Registry.
B. Configure a trigger in Cloud Build for new source changes. The trigger invokes build jobs and build container
images for the microservices. Tag the images with a version number, and push them to Cloud Storage.
C. Create a Scheduler job to check the repo every minute. For any new change, invoke Cloud Build to build
container images for the microservices. Tag the images using the current timestamp, and push them to the
Container Registry.
D. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build one container image,
and tag the image with the label 'latest.' Push the image to the Container Registry.
Answer: A
Explanation:
Google Cloud has two services for storing and managing container images: Artifact Registry and Container
Registry. Tagging each image with the code commit hash ties every build to the exact source revision that produced it.
https://cloud.google.com/container-registry/docs/overview
A. Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to
Google Cloud.
B. Configure the Storage Transfer service from Google Cloud to send the data from your data center to Cloud
Storage.
C. Make sure there are no other users consuming the 1Gbps link, and use multi-thread transfer to upload the
data to Cloud Storage.
D. Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data
to Cloud Storage.
Answer: A
Explanation:
Answer is A.
https://cloud.google.com/transfer-appliance/docs/4.0/overview#location-availability
With a typical network bandwidth of 100 Mbps, one petabyte of data takes about 3 years to upload. However,
with Transfer Appliance, you can receive the appliance and capture a petabyte of data in under 25 days. Your
data can be accessed in Cloud Storage within another 25 days, all without consuming any outbound network
bandwidth.
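The "about 3 years" figure follows from simple arithmetic; the short calculation below (plain Python, assuming a fully utilized 100 Mbps link and decimal petabytes) reproduces it.

# Back-of-the-envelope transfer time: 1 PB over a 100 Mbps link.
petabyte_bits = 1e15 * 8          # 1 PB expressed in bits (decimal units)
link_bps = 100e6                  # 100 Mbps, assumed fully utilized

seconds = petabyte_bits / link_bps
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1f} years")       # roughly 2.5 years; closer to 3 with real-world overhead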
Company Background -
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and
appliances in a collocated data center. However, the capacity of their infrastructure is now insufficient for the
application's rapid growth. Because of this growth and the company's desire to innovate faster,
Dress4Win is committing to a full migration to a public cloud.
Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test
environments. They are also considering building a disaster recovery site, because their current infrastructure is at
a single location. They are not sure which components of their architecture they can migrate as is and which
components they need to change before migrating them.
Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best
practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.
Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.
Support multiple VPN connections between the production data center and cloud environment.
CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are
also concerned that a new competitor could use a public cloud platform to offset their up-front investment and
freeing them to focus on developing better features.
CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its
useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our
traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is
sitting idle.
CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an
initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost
of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30 to 50% lower than our current
model.
Question
The Dress4Win security team has disabled external SSH access into production virtual machines
(VMs) on Google Cloud Platform (GCP).
The operations team needs to remotely manage the VMs, build and push Docker containers, and manage Google
Cloud Storage objects.
What can they do?
Answer: A
Explanation:
Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test
environments. They are also considering building a disaster recovery site, because their current infrastructure is at
a single location. They are not sure which components of their architecture they can migrate as is and which
components they need to change before migrating them.
Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best
practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.
Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.
Support multiple VPN connections between the production data center and cloud environment.
CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are
also concerned that a new competitor could use a public cloud platform to offset their up-front investment and
freeing them to focus on developing better features.
CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its
useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our
traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is
sitting idle.
CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an
initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost
of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30 to 50% lower than our current
model.
Question
At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive
copies of database backup files.
The database files are compressed tar files stored in their current data center.
How should he proceed?
A. Create a cron script using gsutil to copy the files to a Coldline Storage bucket.
B. Create a cron script using gsutil to copy the files to a Regional Storage bucket.
C. Create a Cloud Storage Transfer Service job to copy the files to a Coldline Storage bucket.
D. Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.
Answer: C
Explanation:
Company Background -
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and
appliances in a collocated data center. However, the capacity of their infrastructure is now insufficient for the
application's rapid growth. Because of this growth and the company's desire to innovate faster,
Dress4Win is committing to a full migration to a public cloud.
Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test
environments. They are also considering building a disaster recovery site, because their current infrastructure is at
a single location. They are not sure which components of their architecture they can migrate as is and which
components they need to change before migrating them.
Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.
Support multiple VPN connections between the production data center and cloud environment.
CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are
also concerned that a new competitor could use a public cloud platform to offset their up-front investment and
freeing them to focus on developing better features.
CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its
useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our
traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is
sitting idle.
CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an
initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost
of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30 to 50% lower than our current
model.
Question
As part of their new application experience, Dress4Win allows customers to upload images of
themselves.
The customer has exclusive control over who may view these images.
Customers should be able to upload images with minimal latency and also be shown their images quickly on the
main application page when they log in.
Which configuration should Dress4Win use?
A. Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that
maps each customer's ID and their image files.
B. Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud
Storage that contains the customer's unique ID.
C. Use a distributed file system to store customers' images. As storage needs increase, add more persistent
disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring privacy
of images.
D. Use a distributed file system to store customers' images. As storage needs increase, add more persistent
disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to
their image files.
Answer: A
Explanation:
A is correct. The idea is to build and maintain an external metadata index in a NoSQL database (Cloud
Datastore) that associates each Cloud Storage object key with metadata such as the customer's ID, so objects
can be looked up by attributes you define. This AWS blog post describes the same pattern in the context of
Amazon S3, but the idea applies to Google Cloud Storage as well:
https://aws.amazon.com/blogs/big-data/building-and-maintaining-an-amazon-s3-metadata-index-without-servers/
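A hedged sketch of that metadata index using the google-cloud-datastore Python client; the kind name, property names, and IDs are assumptions made up for illustration.

# Hypothetical sketch: map each uploaded image to its owner in Datastore.
from google.cloud import datastore

client = datastore.Client()

def register_image(customer_id, gcs_uri):
    # One entity per image; the GCS URI points at the object in Cloud Storage.
    entity = datastore.Entity(key=client.key("ImageMetadata"))
    entity.update({"customer_id": customer_id, "gcs_uri": gcs_uri})
    client.put(entity)

def list_images(customer_id):
    # Fast lookup of a customer's images when they open the main page.
    query = client.query(kind="ImageMetadata")
    query.add_filter("customer_id", "=", customer_id)
    return [e["gcs_uri"] for e in query.fetch()]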
Question: 316 CertyIQ
Introductory Info
Company Overview -
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a
website and mobile application. The company also cultivates an active social network that connects their users
with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a
premium app model.
Company Background -
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and
appliances in a collocated data center. However, the capacity of their infrastructure is now insufficient for the
application's rapid growth. Because of this growth and the company's desire to innovate faster,
Dress4Win is committing to a full migration to a public cloud.
Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test
environments. They are also considering building a disaster recovery site, because their current infrastructure is at
a single location. They are not sure which components of their architecture they can migrate as is and which
components they need to change before migrating them.
Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best
practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.
Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.
Support multiple VPN connections between the production data center and cloud environment.
CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are
also concerned that a new competitor could use a public cloud platform to offset their up-front investment and
freeing them to focus on developing better features.
CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its
useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our
traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is
sitting idle.
CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an
initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost
of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30 to 50% lower than our current
model.
Question
Dress4Win has end-to-end tests covering 100% of their endpoints.
They want to ensure that the move to the cloud does not introduce any new bugs.
Which additional testing methods should the developers employ to prevent an outage?
A. They should enable Google Stackdriver Debugger on the application code to show errors in the code.
B. They should add additional unit tests and production scale load tests on their cloud staging environment.
C. They should run the end-to-end tests in the cloud staging environment to determine if the code is working as
intended.
D. They should add canary tests so developers can measure how much of an impact the new release causes to
latency.
Answer: B
Explanation:
They should add additional unit tests and production scale load tests on their cloud staging environment.
Company Background -
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and
appliances in a collocated data center. However, the capacity of their infrastructure is now insufficient for the
application's rapid growth. Because of this growth and the company's desire to innovate faster,
Dress4Win is committing to a full migration to a public cloud.
Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test
environments. They are also considering building a disaster recovery site, because their current infrastructure is at
a single location. They are not sure which components of their architecture they can migrate as is and which
components they need to change before migrating them.
Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best
practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.
Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.
Support multiple VPN connections between the production data center and cloud environment.
CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are
also concerned that a new competitor could use a public cloud platform to offset their up-front investment and
freeing them to focus on developing better features.
CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its
useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our
traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is
sitting idle.
CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an
initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost
of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30% and 50% lower than our current
model.
Question
You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by
auditors for at least 10 years.
Cost optimization is your top priority.
Which cloud services should you choose?
A. Google Cloud Storage Coldline to store the data, and gsutil to access the data.
B. Google Cloud Storage Nearline to store the data, and gsutil to access the data.
C. Google Bigtable with US or EU as location to store the data, and gcloud to access the data.
D. BigQuery to store the data, and a web server cluster in a managed instance group to access the data.
E. Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance
group to access the data.
Answer: A
Explanation:
A is the answer. The documentation describes both Nearline and Coldline as classes for infrequently accessed data,
but with cost optimization as the top priority and a 10-year retention period with only occasional auditor access,
Coldline is the cheaper of the two, and gsutil is sufficient for the infrequent reads.
Reference:
https://cloud.google.com/storage/docs/storage-classes
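As a hedged sketch of what option A looks like in practice (the bucket name, location, and object names below are hypothetical placeholders, not values from the case study), the Coldline bucket could be created with the google-cloud-storage Python client:

# Hedged sketch with the google-cloud-storage client library; the bucket
# name, location, and object names are hypothetical placeholders.
from google.cloud import storage

client = storage.Client()

bucket = client.bucket("dress4win-audit-records")   # hypothetical bucket name
bucket.storage_class = "COLDLINE"                   # low-cost class for rarely read data
coldline_bucket = client.create_bucket(bucket, location="US")

# Optional: enforce the 10-year retention requirement at the bucket level.
coldline_bucket.retention_period = 10 * 365 * 24 * 60 * 60  # seconds
coldline_bucket.patch()

# Upload one record; auditors (or gsutil) can read it back on demand.
blob = coldline_bucket.blob("tax-records/2015-sales.csv")   # hypothetical object name
blob.upload_from_filename("2015-sales.csv")

The same setup can be done from the command line with gsutil mb -c coldline followed by gsutil cp for the uploads, which matches the "gsutil to access the data" wording in option A.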
Question: 318 CertyIQ
Introductory Info
Company Overview -
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a
website and mobile application. The company also cultivates an active social network that connects their users
with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a
premium app model.
Company Background -
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and
appliances in a collocated data center. However, the capacity of their infrastructure is now insufficient for the
application's rapid growth. Because of this growth and the company's desire to innovate faster,
Dress4Win is committing to a full migration to a public cloud.
Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test
environments. They are also considering building a disaster recovery site, because their current infrastructure is at
a single location. They are not sure which components of their architecture they can migrate as is and which
components they need to change before migrating them.
Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best
practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.
Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.
Support multiple VPN connections between the production data center and cloud environment.
CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are
also concerned that a new competitor could use a public cloud platform to offset their up-front investment and
free them to focus on developing better features.
CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its
useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our
traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is
sitting idle.
CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an
initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost
of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30% and 50% lower than our current
model.
Question
The current Dress4Win system architecture has high latency to some customers because it is
located in one data center.
As part of evaluating and optimizing for performance in the cloud, Dress4Win wants to distribute its system
architecture to multiple locations on Google Cloud Platform.
Which approach should they use?
A. Use regional managed instance groups and a global load balancer to increase performance because the
regional managed instance group can grow instances in each region separately based on traffic.
B. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual
machines managed by your operations team.
C. Use regional managed instance groups and a global load balancer to increase reliability by providing
automatic failover between zones in different regions.
D. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual
machines as part of separate managed instance groups.
Answer: A
Explanation:
A is the answer. A regional managed instance group spreads instances across zones in its region and autoscales
independently in each region based on traffic, while the global HTTP(S) load balancer routes each user to the
nearest healthy backend. Deploying regional MIGs in several regions behind one global load balancer therefore
directly reduces the latency caused by serving everything from a single location.
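For illustration only (the project, region, and template names below are hypothetical assumptions), creating one of these regional managed instance groups with the google-cloud-compute Python client might look like the following; the same group would be created in each serving region and then attached as a backend of the global HTTP(S) load balancer.

# Illustrative sketch with the google-cloud-compute client library; project,
# region, and template names are hypothetical assumptions.
from google.cloud import compute_v1

PROJECT = "dress4win-prod"      # hypothetical project ID
REGION = "us-central1"          # one of several serving regions
TEMPLATE = f"projects/{PROJECT}/global/instanceTemplates/web-template"  # hypothetical template

mig = compute_v1.InstanceGroupManager(
    name="web-mig-us-central1",
    base_instance_name="web",
    instance_template=TEMPLATE,
    target_size=3,              # an autoscaler can be attached on top of this
)

client = compute_v1.RegionInstanceGroupManagersClient()
operation = client.insert(
    project=PROJECT,
    region=REGION,
    instance_group_manager_resource=mig,
)
operation.result()  # block until the regional operation completes
print("Regional MIG created; repeat per region, then add each as a backend of the global load balancer.")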
Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is moving their development and test environments.
They are also building a disaster recovery site, because their current infrastructure is at a single location. They are
not sure which components of their architecture they can migrate as is and which components they need to
change before migrating them.
Existing Technical Environment -
Web application servers providing micro-services based APIs and static content:
- Tomcat - Java
- Nginx
- 4 core CPUs
- 32 GB of RAM
20 Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
- 8 core CPUs
- 128 GB of RAM
- 4x 5 TB HDD (RAID 1)
3 RabbitMQ servers for messaging, social notifications, and events:
- 8 core CPUs
- 32GB of RAM
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners
- 8 core CPUs
- 32GB of RAM
Storage appliances:
- iSCSI for VM hosts
- Fiber channel SAN - MySQL databases
- 1 PB total storage; 400 TB available
- NAS - image storage, logs, backups
- 100 TB total storage; 35 TB available
Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best
practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Technical Requirements -
Easily create non-production environments in the cloud.
Implement an automation framework for provisioning resources in cloud.
Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.
Support failover of the production environment to cloud during an emergency.
Encrypt data on the wire and at rest.
Support multiple private connections between the production data center and cloud environment.
Executive Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are
also concerned that a competitor could use a public cloud platform to offset their up-front investment and free
them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend
evenings; during other times, 80% of our capacity is sitting idle.
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an
initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost
of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between
30% and 50% over our current model.
Question
For this question, refer to the Dress4Win case study. Dress4Win is
expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the
existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within
the next 6 months. How will you configure the solution to scale for this growth without making major application
changes and still maximize the ROI?
A. Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage.
Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.
B. Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk
storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.
C. Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to
Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.
D. Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to
Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.
Answer: D
Explanation:
D is the answer. Managed instance groups let the Tomcat and Nginx tiers scale automatically to 10 times the
traffic without application changes, and Cloud SQL, Cloud Pub/Sub, and Cloud Dataproc are managed replacements
for MySQL, RabbitMQ, and the Hadoop/Spark cluster. The NAS content (images, logs, backups) maps to Cloud
Storage, which scales further and costs less than Persistent Disk, so D maximizes ROI compared with option C.
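As a small, hedged example of the RabbitMQ-to-Pub/Sub part of this migration (the project ID and topic name below are hypothetical placeholders), publishing an event with the google-cloud-pubsub Python client looks like this:

# Hedged sketch with the google-cloud-pubsub client library; the project ID
# and topic name are hypothetical placeholders.
from google.cloud import pubsub_v1

PROJECT_ID = "dress4win-prod"       # hypothetical project ID
TOPIC_ID = "social-notifications"   # hypothetical topic replacing a RabbitMQ exchange

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# Pub/Sub messages carry a bytes payload plus optional string attributes.
future = publisher.publish(
    topic_path,
    b'{"user_id": 42, "event": "new_follower"}',
    source="web-frontend",
)
print(f"Published message ID: {future.result()}")

Consumers would read from a Pub/Sub subscription instead of a RabbitMQ queue, which removes the need to operate the three RabbitMQ servers listed in the existing environment.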
Thank you
Thank you for being so interested in the premium exam material.
I'm glad to hear that you found it informative and helpful.
If you have any feedback or thoughts, I would love to hear them.
Your insights can help me improve our writing and better understand our readers.
Best of Luck
You have worked hard to get to this point, and you are well-prepared for the exam.
Keep your head up, stay positive, and go show that exam what you're made of!