Free GCP-PCDE Sample Questions — Google Cloud Platform - Professional Cloud Database Engineer

Free GCP-PCDE sample questions for the Google Cloud Platform - Professional Cloud Database Engineer exam. No account required: study at your own pace.

Want an interactive quiz? Take the full GCP-PCDE practice test

Looking for more? Get the full 168+ question practice PDF ($10) for offline study and deeper preparation.

Question 1

Your organization has strict policies on tracking rollouts to production and periodically shares this information with external auditors to meet compliance requirements. You need to enable auditing on several Cloud Spanner databases. What should you do?

  • A. Use replication to roll out changes to higher environments
  • B. Use backup and restore to roll out changes to higher environments
  • C. Use Liquibase to roll out changes to higher environments
  • D. Manually capture detailed DBA audit logs when changes are rolled out to higher environments
Correct Answer:
C. Use Liquibase to roll out changes to higher environments
Question 2

Your digital-native business runs its database workloads on Cloud SQL. Your website must be globally accessible 24/7. You need to prepare your Cloud SQL instance for high availability (HA). You want to follow Google-recommended practices. What should you do? (Choose two.)

  • A. Set up manual backups
  • B. Create a PostgreSQL database on-premises as the HA option
  • C. Configure single zone availability for automated backups
  • D. Enable point-in-time recovery
  • E. Schedule automated backups
Correct Answer:
  • D. Enable point-in-time recovery
  • E. Schedule automated backups
Question 3

You want to migrate an on-premises 100 TB Microsoft SQL Server database to Google Cloud over a 1 Gbps network link. You have 48 hours of allowed downtime to migrate this database. What should you do? (Choose two.)

  • A. Use a change data capture (CDC) migration strategy
  • B. Move the physical database servers from on-premises to Google Cloud
  • C. Keep the network bandwidth at 1 Gbps, and then perform an offline data migration
  • D. Increase the network bandwidth to 2 Gbps, and then perform an offline data migration
  • E. Increase the network bandwidth to 10 Gbps, and then perform an offline data migration
Correct Answer:
  • A. Use a change data capture (CDC) migration strategy
  • E. Increase the network bandwidth to 10 Gbps, and then perform an offline data migration
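The 48-hour window in this question can be sanity-checked with a quick transfer-time calculation. A rough sketch that ignores protocol overhead, compression, and contention, so it serves only as an order-of-magnitude check:

```python
def transfer_hours(size_tb: float, link_gbps: float) -> float:
    """Rough wall-clock hours to move size_tb terabytes over a link_gbps link.

    Ignores protocol overhead and compression, so real transfers take longer;
    useful only as an order-of-magnitude check.
    """
    bits = size_tb * 1e12 * 8           # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9)  # link rate in bits/second
    return seconds / 3600

# 100 TB over the three candidate link speeds:
for gbps in (1, 2, 10):
    print(f"{gbps:>2} Gbps: {transfer_hours(100, gbps):6.1f} hours")
```

At 1 Gbps the copy alone takes roughly 222 hours and at 2 Gbps roughly 111 hours, both well outside the 48-hour window; only 10 Gbps (about 22 hours) fits. That is why the answers pair E with A: CDC keeps the source serving traffic while the bulk copy runs.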
Question 4

You are building a data warehouse on BigQuery. Sources of data include several MySQL databases located on-premises. You need to transfer data from these databases into BigQuery for analytics. You want to use a managed solution that has low latency and is easy to set up. What should you do?

  • A. Use Datastream to connect to your on-premises database and create a stream. Have Datastream write to Cloud Storage. Then use Dataflow to process the data into BigQuery
  • B. Use Cloud Data Fusion and scheduled workflows to extract data from MySQL. Transform this data into the appropriate schema, and load this data into your BigQuery database
  • C. Use Database Migration Service to replicate data to a Cloud SQL for MySQL instance. Create federated tables in BigQuery on top of the replicated instances to transform and load the data into your BigQuery database
  • D. Create extracts from your on-premises databases periodically, and push these extracts to Cloud Storage. Upload the changes into BigQuery, and merge them with existing tables
Correct Answer:
A. Use Datastream to connect to your on-premises database and create a stream. Have Datastream write to Cloud Storage. Then use Dataflow to process the data into BigQuery
Question 5

Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss. What should you do?

  • A. Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption
  • B. Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance
  • C. Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted
  • D. Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption
Correct Answer:
C. Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted
Question 6

You have deployed a Cloud SQL for SQL Server instance. In addition, you created a cross-region read replica for disaster recovery (DR) purposes. Your company requires you to maintain and monitor a recovery point objective (RPO) of less than 5 minutes. You need to verify that your cross-region read replica meets the allowed RPO. What should you do?

  • A. Use Cloud SQL instance monitoring
  • B. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL
  • C. Use Cloud SQL logs
  • D. Use the SQL Server Always On Availability Group dashboard
Correct Answer:
D. Use the SQL Server Always On Availability Group dashboard
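Whichever dashboard surfaces the replication lag metric, the RPO check itself reduces to comparing observed replica lag against the 5-minute budget. A minimal sketch of that threshold logic; the sample readings are illustrative, not real Cloud SQL metric values:

```python
RPO_SECONDS = 5 * 60  # requirement: recovery point objective under 5 minutes

def rpo_breached(replica_lag_seconds: float, budget: int = RPO_SECONDS) -> bool:
    """True when the replica has fallen further behind than the RPO allows."""
    return replica_lag_seconds > budget

# Sample lag readings (seconds) as they might arrive from a monitoring feed:
readings = [12.0, 45.5, 310.0, 90.0]
alerts = [lag for lag in readings if rpo_breached(lag)]
```

Here only the 310-second reading exceeds the 300-second budget and would raise an alert.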
Question 7

You use Python scripts to generate weekly SQL reports to assess the state of your databases and determine whether you need to reorganize tables or run statistics. You want to automate this report but need to minimize operational costs and overhead. What should you do?

  • A. Create a VM in Compute Engine, and run a cron job
  • B. Create a Cloud Composer instance, and create a directed acyclic graph (DAG)
  • C. Create a Cloud Function, and call the Cloud Function using Cloud Scheduler
  • D. Create a Cloud Function, and call the Cloud Function from a Cloud Tasks queue
Correct Answer:
C. Create a Cloud Function, and call the Cloud Function using Cloud Scheduler
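The winning pattern is a lightweight HTTP-triggered function that Cloud Scheduler invokes on a cron schedule. A minimal sketch of what the report logic might look like; the check names, placeholder SQL, injected `run_query` callable, and report format are all illustrative, and a real deployment would use the Functions Framework entry-point signature plus a database driver:

```python
# Hypothetical weekly-report logic; in Cloud Functions this would sit inside
# the HTTP entry point that Cloud Scheduler calls (e.g. every Monday morning).
WEEKLY_CHECKS = {
    "bloated_tables": "SELECT ...",    # placeholder SQL, illustrative only
    "stale_statistics": "SELECT ...",  # placeholder SQL, illustrative only
}

def generate_weekly_report(run_query, checks=None):
    """Run each named check and return a plain-text summary.

    `run_query` is injected (a callable taking SQL and returning rows) so the
    sketch stays self-contained and testable without a live database.
    """
    checks = checks or WEEKLY_CHECKS
    lines = []
    for name, sql in checks.items():
        rows = run_query(sql)
        lines.append(f"{name}: {len(rows)} row(s) flagged")
    return "\n".join(lines)

# Local smoke test with a stubbed query runner:
report = generate_weekly_report(lambda sql: [("t1",), ("t2",)])
```

Because the function only runs while Scheduler triggers it, you pay nothing between runs, which is the cost advantage over an always-on VM or a Cloud Composer environment.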
Question 8

You are setting up a new AlloyDB instance and want users to be able to use their existing Identity and Access Management (IAM) identities to connect to AlloyDB. You have performed the following steps:

  • Manually enabled IAM authentication on the AlloyDB instance
  • Granted the alloydb.databaseUser and serviceusage.serviceUsageConsumer IAM roles to the users
  • Created new AlloyDB database users based on corresponding IAM identities

Users are able to connect but report that they cannot SELECT from application tables. What should you do?

  • A. Grant the new database users access privileges to the appropriate tables
  • B. Grant the alloydb.client IAM role to each user
  • C. Grant the alloydb.viewer IAM role to each user
  • D. Grant the alloydb.alloydbreplica IAM role to each user
Correct Answer:
A. Grant the new database users access privileges to the appropriate tables
Question 9

Your company wants you to migrate their Oracle, MySQL, Microsoft SQL Server, and PostgreSQL relational databases to Google Cloud. You need a fully managed, flexible database solution when possible. What should you do?

  • A. Migrate all the databases to Cloud SQL
  • B. Migrate the Oracle, MySQL, and Microsoft SQL Server databases to Cloud SQL, and migrate the PostgreSQL databases to Compute Engine
  • C. Migrate the MySQL, Microsoft SQL Server, and PostgreSQL databases to Compute Engine, and migrate the Oracle databases to Bare Metal Solution for Oracle
  • D. Migrate the MySQL, Microsoft SQL Server, and PostgreSQL databases to Cloud SQL, and migrate the Oracle databases to Bare Metal Solution for Oracle
Correct Answer:
D. Migrate the MySQL, Microsoft SQL Server, and PostgreSQL databases to Cloud SQL, and migrate the Oracle databases to Bare Metal Solution for Oracle
Question 10

Your rapidly growing ecommerce company is migrating their analytics workloads to AlloyDB for PostgreSQL. You anticipate a significant increase in reporting queries as the business scales. You need a read pool strategy to scale your analytics operations in anticipation of future growth while minimizing costs. What should you do?

  • A. Direct all complex, long-running analytics queries to the primary instance, and only use read pools for short, frequent reports
  • B. Change the instance sizes of the read nodes in the read pool
  • C. Begin with minimal read pools and iteratively expand or shrink them based on real-time load monitoring to optimize resource allocation
  • D. Assign all reporting queries to a single, large read pool to maximize the combined compute resources available for analytics
Correct Answer:
C. Begin with minimal read pools and iteratively expand or shrink them based on real-time load monitoring to optimize resource allocation
Question 11

You are the database administrator of a Cloud SQL for PostgreSQL instance that has pgaudit disabled. Users are complaining that their queries are taking longer to execute and performance has degraded over the past few months. You need to collect and analyze query performance data to help identify slow-running queries. What should you do?

  • A. View Cloud SQL operations to view historical query information
  • B. Write a Logs Explorer query to identify database queries with high execution times
  • C. Review application logs to identify database calls
  • D. Use the Query Insights dashboard to identify high execution times
Correct Answer:
D. Use the Query Insights dashboard to identify high execution times
Question 12

An analytics team needs to read data out of Cloud SQL for SQL Server and update a table in Cloud Spanner. You need to create a service account and grant least privilege access using predefined roles. What roles should you assign to the service account?

  • A. roles/cloudsql.viewer and roles/spanner.databaseUser
  • B. roles/cloudsql.editor and roles/spanner.admin
  • C. roles/cloudsql.client and roles/spanner.databaseReader
  • D. roles/cloudsql.instanceUser and roles/spanner.databaseUser
Correct Answer:
A. roles/cloudsql.viewer and roles/spanner.databaseUser
Question 13

You are migrating your critical production database from Amazon RDS for MySQL to Cloud SQL for MySQL by using Google Cloud's Database Migration Service. You want to keep disruption to your production database to a minimum and, at the same time, optimize migration performance. What should you do?

  • A. Create and start multiple Database Migration Service jobs to migrate your database to the target Cloud SQL for MySQL instance
  • B. Upgrade the Amazon RDS for MySQL primary instance to an instance with more vCPUs and memory, and then run Google Cloud's Database Migration Service
  • C. Create a single Database Migration Service migration job with initial load parallelism configured to maximum on the source Amazon RDS for MySQL read replica
  • D. Create a single Database Migration Service migration job with initial load parallelism configured to maximum on the Amazon RDS for MySQL primary instance
Correct Answer:
C. Create a single Database Migration Service migration job with initial load parallelism configured to maximum on the source Amazon RDS for MySQL read replica
Question 14

You are running an instance of Cloud Spanner as the backend of your ecommerce website. You learn that the quality assurance (QA) team has doubled the number of their test cases. You need to create a copy of your Cloud Spanner database in a new test environment to accommodate the additional test cases. You want to follow Google-recommended practices. What should you do?

  • A. Use Cloud Functions to run the export in Avro format
  • B. Use Cloud Functions to run the export in text format
  • C. Use Dataflow to run the export in Avro format
  • D. Use Dataflow to run the export in text format
Correct Answer:
C. Use Dataflow to run the export in Avro format
Question 15

You are deploying a Cloud SQL for MySQL database to serve a non-critical application. The database size is 10 GB and will be updated every night with data stored in a Cloud Storage bucket. The database serves read-only traffic from the application during the day. The data locality requirement of this application mandates that data must reside in a single region. You want to minimize the cost of running this database while maintaining an RTO of 1 day. What should you do?

  • A. Create a Cloud SQL for MySQL instance with high availability (HA) enabled. Configure automated backups of the Cloud SQL instance, and use the default backup location
  • B. Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Create a read replica in the same zone
  • C. Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Create a read replica in a second region
  • D. Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Configure automated backups of the Cloud SQL instance, and use a custom backup location to store backups in a Cloud Storage bucket in the same region
Correct Answer:
D. Create a Cloud SQL for MySQL instance with high availability (HA) disabled. Configure automated backups of the Cloud SQL instance, and use a custom backup location to store backups in a Cloud Storage bucket in the same region
Question 16

Your organization has a critical business app that is running with a Cloud SQL for MySQL backend database. Your company wants to build the most fault-tolerant and highly available solution possible. You need to ensure that the application database can survive a zonal and regional failure with a primary region of us-central1 and the backup region of us-east1. What should you do?

  • A. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-west1-b. 3. Create a read replica in us-east1-c
  • B. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-central1-b. 3. Create a read replica in us-east1-b
  • C. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-east1-b. 3. Create a read replica in us-east1-c
  • D. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-east1-b. 3. Create a read replica in us-central1-b
Correct Answer:
B. 1. Provision a Cloud SQL for MySQL instance in us-central1-a. 2. Create a multiple-zone instance in us-central1-b. 3. Create a read replica in us-east1-b
Question 17

You are running a large, highly transactional application on Oracle Real Application Cluster (RAC) that is multi-tenant and uses shared storage. You need a solution that ensures high-performance throughput and a low-latency connection between applications and databases. The solution must also support existing Oracle features and provide ease of migration to Google Cloud. What should you do?

  • A. Migrate to Compute Engine
  • B. Migrate to Bare Metal Solution for Oracle
  • C. Migrate to Google Kubernetes Engine (GKE)
  • D. Migrate to Google Cloud VMware Engine
Correct Answer:
B. Migrate to Bare Metal Solution for Oracle
Question 18

You are designing a payments processing application on Google Cloud. The application must continue to serve requests and avoid any user disruption if a regional failure occurs. You need to use AES-256 to encrypt data in the database, and you want to control where you store the encryption key. What should you do?

  • A. Use Cloud Spanner with a customer-managed encryption key (CMEK)
  • B. Use Cloud Spanner with default encryption
  • C. Use Cloud SQL with a customer-managed encryption key (CMEK)
  • D. Use Bigtable with default encryption
Correct Answer:
A. Use Cloud Spanner with a customer-managed encryption key (CMEK)
Question 19

Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?

  • A. Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys
  • B. Use Cloud SQL Auth proxy
  • C. Connect to Cloud SQL using a connection that has SSL encryption
  • D. Use customer-managed encryption keys with Cloud SQL
Correct Answer:
D. Use customer-managed encryption keys with Cloud SQL
Question 20

Your organization has a security policy to ensure that all Cloud SQL for PostgreSQL databases are secure. You want to protect sensitive data by using a key that meets specific locality or residency requirements. Your organization needs to control the key's lifecycle activities. You need to ensure that data is encrypted at rest and in transit. What should you do?

  • A. Create the database with Google-managed encryption keys
  • B. Create the database with customer-managed encryption keys
  • C. Create the database persistent disk with Google-managed encryption keys
  • D. Create the database persistent disk with customer-managed encryption keys
Correct Answer:
B. Create the database with customer-managed encryption keys

Aced these? Get the Full Exam

Download the complete GCP-PCDE study bundle with 168+ questions in a single printable PDF.