Free GCP-PCA Sample Questions — Google Cloud Platform - Professional Cloud Architect

Free GCP-PCA sample questions for the Google Cloud Platform - Professional Cloud Architect exam. No account required: study at your own pace.

Want an interactive quiz? Take the full GCP-PCA practice test

Looking for more? Get the full PDF with 304+ practice questions for $10, for offline study and deeper preparation.

Question 1

Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?

  • A. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use Cloud VPN to join the two VPCs
  • B. 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team
  • C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team
  • D. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use VPC Peering to join the two VPCs
Show Answer
Correct Answer:
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team
Question 2

Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud. You want to streamline the process and follow Google-recommended practices. What should you do?

  • A. Configure Workload Identity and service accounts to be used by the application platform
  • B. Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform
  • C. Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform
  • D. Configure HashiCorp Vault on Compute Engine, and use customer managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform
Show Answer
Correct Answer:
A. Configure Workload Identity and service accounts to be used by the application platform
Question 3

You need to design a solution for global load balancing based on the URL path being requested. You need to ensure operational reliability and end-to-end in-transit encryption based on Google best practices. What should you do?

  • A. Create a cross-region load balancer with URL Maps
  • B. Create an HTTPS load balancer with URL Maps
  • C. Create appropriate instance groups and instances. Configure SSL proxy load balancing
  • D. Create a global forwarding rule. Configure SSL proxy load balancing
Show Answer
Correct Answer:
B. Create an HTTPS load balancer with URL Maps
Question 4

For this question, refer to the TerramEarth case study. You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google's recommended best practices. What should you do?

  • A. Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are different
  • B. Make func_query 'Require authentication.' Create a unique service account and associate it with func_display. Grant the service account the invoker role for func_query. Create an ID token in func_display and include the token in the request when invoking func_query
  • C. Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display
  • D. Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall for func_query to only allow traffic from func_display. Also, make sure both functions use the same service account
Show Answer
Correct Answer:
B. Make func_query 'Require authentication.' Create a unique service account and associate it with func_display. Grant the service account the invoker role for func_query. Create an ID token in func_display and include the token in the request when invoking func_query
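The core of option B is that the platform verifies a Google-signed ID token whose audience is the receiving function's URL, and only admits callers whose service account holds the invoker role. The check can be sketched in a few lines of Python; the claim names mirror real ID tokens ("aud", "email"), but the function, URLs, and service account names are illustrative, not the actual Cloud Functions implementation:

```python
# Illustrative model of the authorization check behind option B.
# Hypothetical names throughout; the real verification is done by the platform.
def is_invocation_allowed(claims: dict, expected_audience: str,
                          allowed_invokers: set) -> bool:
    if claims.get("aud") != expected_audience:   # token was minted for another service
        return False
    # The calling service account must hold the invoker role on func_query.
    return claims.get("email") in allowed_invokers

claims = {"aud": "https://REGION-PROJECT.cloudfunctions.net/func_query",
          "email": "func-display-sa@my-project.iam.gserviceaccount.com"}
print(is_invocation_allowed(
    claims,
    "https://REGION-PROJECT.cloudfunctions.net/func_query",
    {"func-display-sa@my-project.iam.gserviceaccount.com"}))  # True
```

This is also why option A is weaker: a shared static token in an environment variable can leak and never expires, while ID tokens are short-lived and bound to one audience.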
Question 5

Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables. You want analysts from each country to be able to see and query only the data for their respective countries. How should you configure the access rights?

  • A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group
  • B. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group
  • C. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country-group
  • D. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group
Show Answer
Correct Answer:
A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group
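Option A works because running a query in BigQuery requires two separate grants: a project-level bigquery.jobUser role (to create the query job) and view access on the specific dataset (to read the data). A pure-Python sketch of that two-part check, with made-up group names, models the idea; it is not the BigQuery IAM engine:

```python
# Illustrative model: a user can query a country's dataset only if they hold
# jobUser at the project level AND their group appears in that dataset's ACL.
def can_query(user_groups: set, dataset_country: str,
              jobuser_group: str, dataset_acl: dict) -> bool:
    has_jobuser = jobuser_group in user_groups                      # can run jobs
    has_view = bool(user_groups & dataset_acl.get(dataset_country, set()))
    return has_jobuser and has_view

acl = {"de": {"analysts-de"}, "fr": {"analysts-fr"}}
groups = {"all_analysts", "analysts-de"}        # a German analyst's memberships
print(can_query(groups, "de", "all_analysts", acl))  # True
print(can_query(groups, "fr", "all_analysts", acl))  # False: no view on fr dataset
```

Dataset-level sharing (not table-level, as in B and D) is what keeps the per-country boundary manageable as tables are added.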
Question 6

Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?

  • A. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level using bucket lock
  • B. 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month
  • C. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days
  • D. 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table. 2. Set a time_partitioning_expiration of 30 days
Show Answer
Correct Answer:
A. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level using bucket lock
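The lifecycle in option A can be sketched as two simple rules: objects stay in Standard for the 30-day active-query window, then move to Coldline for cheap retention, and a bucket-lock retention policy refuses deletes before two years. The sketch below is illustrative Python, not a real lifecycle configuration:

```python
# Illustrative model of option A's lifecycle and retention policy.
RETENTION_DAYS = 2 * 365   # two-year compliance requirement

def storage_class(age_days: int) -> str:
    # Object Lifecycle rule: Standard for 30 days, then Coldline.
    return "STANDARD" if age_days < 30 else "COLDLINE"

def delete_allowed(age_days: int) -> bool:
    # Bucket lock makes the retention policy irreversible: no deletes
    # until the object has aged past the retention period.
    return age_days >= RETENTION_DAYS

print(storage_class(10), storage_class(45))      # STANDARD COLDLINE
print(delete_allowed(100), delete_allowed(800))  # False True
```

The BigQuery options (C and D) fail the compliance requirement because the 30-day partition expiration deletes the data instead of retaining it.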
Question 7

For this question, refer to the TerramEarth case study. You are building a microservice-based application for TerramEarth. The application is based on Docker containers. You want to follow Google-recommended practices to build the application continuously and store the build artifacts. What should you do?

  • A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. Push the images to the Container Registry
  • B. Configure a trigger in Cloud Build for new source changes. The trigger invokes build jobs and build container images for the microservices. Tag the images with a version number, and push them to Cloud Storage
  • C. Create a Scheduler job to check the repo every minute. For any new change, invoke Cloud Build to build container images for the microservices. Tag the images using the current timestamp, and push them to the Container Registry
  • D. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build one container image, and tag the image with the label 'latest.' Push the image to the Container Registry
Show Answer
Correct Answer:
A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. Push the images to the Container Registry
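The reason commit-hash tags (option A) beat 'latest' or timestamps is that the tag is immutable, reproducible, and traceable back to the exact source revision. Cloud Build exposes the short commit hash to build steps as the $SHORT_SHA substitution; the helper below is a hypothetical sketch of the tagging convention, with a made-up registry path:

```python
# Illustrative: build an image reference tagged with the short commit hash,
# mirroring Cloud Build's $SHORT_SHA substitution. Registry path is made up.
def image_ref(registry: str, service: str, commit_sha: str) -> str:
    return f"{registry}/{service}:{commit_sha[:7]}"   # short SHA, like $SHORT_SHA

print(image_ref("gcr.io/my-project", "telemetry-api",
                "9fceb02d0ae598e95dc970b74767f19372d61af8"))
# gcr.io/my-project/telemetry-api:9fceb02
```

With this convention, rolling back a microservice is just redeploying the image tagged with a known-good commit.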
Question 8

You are using a GitHub repository for your application’s source code. You want to set up an efficient and secure continuous deployment process to automatically build and deploy the application to Cloud Run whenever a pull request is merged. What should you do?

  • A. Create a GitHub webhook trigger in Cloud Build. Once a pull request is merged, trigger Cloud Build to build a container image and save it in Artifact Registry. Use Config Sync to deploy the application to Cloud Run
  • B. Create a workflow using GitHub Actions to build and deploy the application to Cloud Run once a pull request is merged. The workflow will use a service account key checked in with your source code for deployment permission
  • C. Create a GitHub Enterprise trigger in Cloud Build. Once a pull request is merged, trigger Cloud Build to build and deploy the application to Cloud Run. Save the deployment credential to Secret Manager
  • D. Connect your repository using the Cloud Build GitHub app. Create a trigger in Cloud Build. Once a pull request is merged, trigger Cloud Build to build and deploy the application to Cloud Run
Show Answer
Correct Answer:
D. Connect your repository using the Cloud Build GitHub app. Create a trigger in Cloud Build. Once a pull request is merged, trigger Cloud Build to build and deploy the application to Cloud Run
Question 9

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google's AI Platform so HRL can understand and interpret the predictions. What should you do?

  • A. Use Explainable AI
  • B. Use Vision AI
  • C. Use Google Cloud's operations suite
  • D. Use Jupyter Notebooks
Show Answer
Correct Answer:
A. Use Explainable AI
Question 10

Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that is running multiple microservices. The cluster has both Anthos Service Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay. What should you do?

  • A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices
  • B. Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the configurations of the filtered workloads
  • C. Use Anthos Config Management to create a namespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, visit the workloads and filter on the namespace. Inspect the configurations of the filtered workloads
  • D. Reinstall Istio using the default Istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console
Show Answer
Correct Answer:
A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices
Question 11

Your product team is building a critical, customer-facing application on Google Cloud. The development team wants to use Spanner for their database to take advantage of its horizontal scalability and low operational overhead. However, the FinOps team is concerned about the direct monthly cost of Spanner and proposed using a self-managed PostgreSQL database on Compute Engine VMs instead. You need to resolve this conflict and ensure the project moves forward with an architecturally sound database choice that balances technical requirements with financial constraints. What should you do?

  • A. Provide the development team with a reference architecture for deploying a highly available PostgreSQL cluster on a regional managed instance group (MIG)
  • B. Suggest using Cloud SQL for PostgreSQL as a compromise to get a managed service at a lower cost than Spanner
  • C. Develop a total cost of ownership (TCO) analysis that includes operational overhead, and present it in a workshop to facilitate a decision
  • D. Cite the reliability and performance optimization pillars of the Google Cloud Well-Architected Framework to formally justify the use of Spanner
Show Answer
Correct Answer:
C. Develop a total cost of ownership (TCO) analysis that includes operational overhead, and present it in a workshop to facilitate a decision
Question 12

You are deploying a PHP App Engine Standard service with Cloud SQL as the backend. You want to minimize the number of queries to the database. What should you do?

  • A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL
  • B. Set the memcache service level to dedicated. Create a cron task that runs every minute to populate the cache with keys containing query results
  • C. Set the memcache service level to shared. Create a cron task that runs every minute to save all expected queries to a key called "cached_queries"
  • D. Set the memcache service level to shared. Create a key called "cached_queries", and return database values from the key before using a query to Cloud SQL
Show Answer
Correct Answer:
A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL
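Option A is the classic cache-aside pattern: hash the query text to form a cache key, return the cached result on a hit, and only fall through to Cloud SQL on a miss. The sketch below uses a plain dict in place of the dedicated memcache service and a stub in place of the database, so it is illustrative only:

```python
import hashlib

# Cache-aside sketch of option A. A dict stands in for dedicated memcache,
# and the "database" is a stub counter so we can see queries avoided.
cache = {}
db_queries = 0

def run_query(sql: str):
    global db_queries
    key = hashlib.sha256(sql.encode()).hexdigest()   # key = hash of the query
    if key in cache:
        return cache[key]          # memcache hit: no Cloud SQL round trip
    db_queries += 1                # simulate hitting Cloud SQL
    result = [("row", 1)]          # placeholder result set
    cache[key] = result            # populate the cache for next time
    return result

run_query("SELECT * FROM users")
run_query("SELECT * FROM users")   # identical query, served from cache
print(db_queries)                  # 1
```

The dedicated (rather than shared) service level matters because shared memcache offers no capacity guarantees, so entries can be evicted at any time.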
Question 13

You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?

  • A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances
  • B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances
  • C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances
  • D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances
Show Answer
Correct Answer:
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances
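The reliability in option B comes from decoupling: the cron job only publishes, and a durable topic buffers work so worker restarts don't lose tasks. A queue.Queue stands in for Cloud Pub/Sub in this pure-Python sketch; it models the shape of the design, not the real service:

```python
import queue

# Pub/Sub-style decoupling sketch for option B. queue.Queue models the topic;
# real Pub/Sub adds durability, fan-out, and acknowledgement semantics.
topic = queue.Queue()

def cron_tick(task_id: int):
    topic.put(task_id)             # App Engine Cron just publishes and returns

def worker_drain():
    done = []
    while not topic.empty():       # subscriber pulls at its own pace
        done.append(topic.get())
    return done

for i in range(3):
    cron_tick(i)
print(worker_drain())              # [0, 1, 2]
```

In options A and C, a direct call from cron to the utility service means a worker outage at tick time silently drops that task.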
Question 14

Your company is expanding its AI-powered operations nationwide and has chosen accelerator-based compute for the AI workloads. The batch image processing workloads are not time-sensitive and can tolerate interruptions. You need to rapidly deploy cost-effective accelerator nodes for these batch tasks, ensuring rapid deployment and data persistence when necessary. What should you do?

  • A. Deploy standard VMs with configured accelerators and attached persistent disks
  • B. Deploy spot VMs with attached persistent disks and implement checkpoint mechanisms
  • C. Deploy spot VMs with local SSD to reduce time for bursty workloads
  • D. Deploy Cloud Run functions with ephemeral local SSD
Show Answer
Correct Answer:
B. Deploy spot VMs with attached persistent disks and implement checkpoint mechanisms
Question 15

You are designing the storage architecture for a financial analytics platform. The platform ingests and stores terabytes of transactional data daily, which is used for both real-time fraud detection and long-term historical analysis. Transaction data from the last 30 days must be accessible with very low latency for the fraud detection engine. Data older than 30 days is accessed infrequently for quarterly reports, where retrieval times of a few seconds are acceptable. All data must be retained for five years to meet compliance regulations. You need to design a solution as cost-effective as possible. What should you do?

  • A. Store all transaction data in a Cloud Storage bucket using the Standard storage class for the entire five-year retention period
  • B. Ingest all data into BigQuery using time-partitioned tables, and rely on BigQuery’s automatic long-term storage pricing for data older than 90 days
  • C. Configure a Cloud Storage bucket with an Object Lifecycle Management policy to transition data from the Standard class to the Archive class after 30 days
  • D. Configure a Cloud Storage bucket with an Object Lifecycle Management policy to transition data from the Standard class to the Coldline class after 30 days
Show Answer
Correct Answer:
D. Configure a Cloud Storage bucket with an Object Lifecycle Management policy to transition data from the Standard class to the Coldline class after 30 days
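A rough rule of thumb behind option D: Coldline is priced for data read roughly once a quarter (90-day minimum storage duration), while Archive targets data touched less than once a year (365-day minimum). The quarterly reports in the scenario therefore point at Coldline. The picker below encodes that heuristic with thresholds mirroring the minimum storage durations; real cost figures are deliberately omitted:

```python
# Illustrative heuristic for choosing a cold storage class by access rate.
# Thresholds mirror GCS minimum storage durations, not exact pricing math.
def pick_cold_class(reads_per_year: float) -> str:
    if reads_per_year >= 12:
        return "NEARLINE"   # ~monthly access (30-day minimum duration)
    if reads_per_year >= 1:
        return "COLDLINE"   # ~quarterly access (90-day minimum duration)
    return "ARCHIVE"        # rarely read (365-day minimum duration)

print(pick_cold_class(4))    # COLDLINE: quarterly reports
print(pick_cold_class(0.5))  # ARCHIVE
```

Option C fails because Archive's pricing model penalizes quarterly retrievals, even though all GCS classes serve reads quickly.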
Question 16

You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose?

  • A. Google Cloud SQL
  • B. Google Cloud Bigtable
  • C. Google Cloud Storage
  • D. Google Cloud Datastore
Show Answer
Correct Answer:
B. Google Cloud Bigtable
Question 17

You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you do?

  • A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions
  • B. Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services
  • C. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version
  • D. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new version of the application
Show Answer
Correct Answer:
A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions
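Option A is a canary deployment: two Cloud Run revisions of the same service share traffic by percentage, so a small slice of production requests exercises the new code. The sketch below simulates a 10% split with a seeded RNG to show the shape of the routing decision; the revision names and percentage are illustrative:

```python
import random

# Simulated canary split from option A: route each request to the new
# revision with probability canary_percent, otherwise to the stable one.
def route(rng: random.Random, canary_percent: int) -> str:
    return "new-revision" if rng.randrange(100) < canary_percent else "stable"

rng = random.Random(0)   # seeded for a repeatable simulation
hits = sum(route(rng, 10) == "new-revision" for _ in range(10_000))
print(800 < hits < 1200)  # roughly 10% of requests reach the canary -> True
```

Because revisions belong to one service, no extra load balancer (option B) is needed; rollout or rollback is just shifting the percentages.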
Question 18

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? (Choose three.)

  • A. Delete the virtual machine (VM) and disks and create a new one
  • B. Delete the instance, attach the disk to a new VM, and investigate
  • C. Take a snapshot of the disk and connect to a new machine to investigate
  • D. Check inbound firewall rules for the network the machine is connected to
  • E. Connect the machine to another network with very simple firewall rules and investigate
  • F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate
Show Answer
Correct Answer:
  • C. Take a snapshot of the disk and connect to a new machine to investigate
  • D. Check inbound firewall rules for the network the machine is connected to
  • F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate
Question 19

For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution. What change in the on-premises architecture should you make?

  • A. Replace RabbitMQ with Google Pub/Sub
  • B. Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL
  • C. Resize compute resources to match predefined Compute Engine machine types
  • D. Containerize the micro-services and host them in Google Kubernetes Engine
Show Answer
Correct Answer:
D. Containerize the micro-services and host them in Google Kubernetes Engine
Question 20

You are deploying a highly confidential data processing workload on Google Cloud. Your company’s compliance framework mandates that cryptographic keys used for encrypting data at rest must be generated and stored exclusively within a validated Hardware Security Module (HSM). You want to use a fully integrated Google Cloud managed service to handle the lifecycle and usage of these keys. What should you do?

  • A. Use Customer-Supplied Encryption Keys (CSEK) by providing your on-premises generated key with each API request
  • B. Import your on-premises HSM key material into a Cloud KMS key with the SOFTWARE protection level
  • C. Create a new key in Cloud Key Management Service (Cloud KMS) with the HSM protection level
  • D. Configure Cloud External Key Manager (Cloud EKM) to connect to your on-premises HSM
Show Answer
Correct Answer:
C. Create a new key in Cloud Key Management Service (Cloud KMS) with the HSM protection level

Aced these? Get the Full Exam

Download the complete GCP-PCA study bundle with 304+ questions in a single printable PDF.