Question 1
A Database Specialist modified an existing parameter group that is associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change involves a static parameter that controls the number of user connections allowed on the company's most critical RDS SQL Server DB instance. The change has been approved for a specific maintenance window to minimize the impact on users. How should the Database Specialist apply the parameter group change to the DB instance?
A. Select the option to apply the change immediately
B. Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied
C. Apply the change manually by rebooting the DB instance during the approved maintenance window
D. Reboot the secondary Multi-AZ DB instance
Correct Answer:
C. Apply the change manually by rebooting the DB instance during the approved maintenance window
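A minimal sketch of the two RDS calls involved, shown as plain request payloads so no AWS credentials are needed. The field names mirror the RDS ModifyDBParameterGroup and RebootDBInstance APIs; the group, parameter, and instance names are illustrative assumptions.

```python
def modify_static_parameter(group_name: str, name: str, value: str) -> dict:
    """Static parameters only accept the 'pending-reboot' apply method;
    'immediate' is rejected, so the change waits for a reboot."""
    return {
        "DBParameterGroupName": group_name,
        "Parameters": [{
            "ParameterName": name,
            "ParameterValue": value,
            "ApplyMethod": "pending-reboot",
        }],
    }

def reboot_request(instance_id: str) -> dict:
    """Reboot during the approved window to apply the pending change."""
    return {"DBInstanceIdentifier": instance_id, "ForceFailover": False}

change = modify_static_parameter("prod-sqlserver-params", "user connections", "500")
reboot = reboot_request("prod-sqlserver-instance")
```

Because the parameter is static, neither the preconfigured maintenance window nor an "apply immediately" option will activate it; only the manual reboot does.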
Question 2
A company wants to automate the creation of secure test databases with random credentials to be stored safely for later use. The credentials should have sufficient information about each test database to initiate a connection and perform automated credential rotations. The credentials should not be logged or stored anywhere in an unencrypted form. Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation template?
A. Create the database with the MasterUserName and MasterUserPassword properties set to the default values. Then, create the secret with the user name and password set to the same default values. Add a Secret Target Attachment resource with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database. Finally, update the secret's password value with a randomly generated string set by the GenerateSecretString property
B. Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN. Then, create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add the database with the MasterUserName and MasterUserPassword properties set to the user name of the secret
C. Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecretStringTemplate property. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database
D. Create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add an SecretTargetAttachment resource with the SecretId property set to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value matching the desired database ARN. Then, create a database with the MasterUserName and MasterUserPassword properties set to the previously created values in the secret
Correct Answer:
C. Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecretStringTemplate property. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database
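A minimal sketch of the CloudFormation pattern in option C, built as a Python dict and rendered to JSON. The logical IDs ("TestDBSecret", "TestDB") and password settings are illustrative assumptions; the Type and property names follow the AWS::SecretsManager resource types.

```python
import json

template = {
    "Resources": {
        "TestDBSecret": {
            "Type": "AWS::SecretsManager::Secret",
            "Properties": {
                "GenerateSecretString": {
                    # The user name is fixed in the template; only the
                    # password is randomly generated, and it is never logged.
                    "SecretStringTemplate": '{"username": "testadmin"}',
                    "GenerateStringKey": "password",
                    "PasswordLength": 32,
                    "ExcludeCharacters": "\"@/\\",
                },
            },
        },
        "TestDB": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                # Dynamic references resolve at deploy time, so the
                # plaintext password never appears in the template.
                "MasterUsername": {"Fn::Sub": "{{resolve:secretsmanager:${TestDBSecret}:SecretString:username}}"},
                "MasterUserPassword": {"Fn::Sub": "{{resolve:secretsmanager:${TestDBSecret}:SecretString:password}}"},
            },
        },
        "SecretAttachment": {
            # Links the secret to the database it belongs to.
            "Type": "AWS::SecretsManager::SecretTargetAttachment",
            "Properties": {
                "SecretId": {"Ref": "TestDBSecret"},
                "TargetId": {"Ref": "TestDB"},
                "TargetType": "AWS::RDS::DBInstance",
            },
        },
    },
}
rendered = json.dumps(template, indent=2)
```

The SecretTargetAttachment is what adds connection metadata (host, port, engine) to the secret, which is what later makes automated credential rotation possible.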
Question 3
A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against the read replica. The database team wants to create additional tables in the read replica that will be accessible only from the read replica to support the tests. What should the database specialist do to allow the database team to create the test tables?
A. Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables
B. Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option
C. Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables
D. Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables
Correct Answer:
D. Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables
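Option D is required because default parameter groups cannot be edited; a custom group must be created, modified, and attached. A minimal sketch of the payloads, mirroring the RDS API (the group name, family, and replica identifier are hypothetical):

```python
# Step 1: CreateDBParameterGroup -- the default group is immutable.
create_group = {
    "DBParameterGroupName": "replica-writable",
    "DBParameterGroupFamily": "mysql8.0",
    "Description": "Read replica group with read_only disabled",
}

# Step 2: ModifyDBParameterGroup -- turn off read-only mode.
modify_group = {
    "DBParameterGroupName": "replica-writable",
    "Parameters": [{
        "ParameterName": "read_only",
        "ParameterValue": "0",       # false: allow writes on the replica
        "ApplyMethod": "immediate",
    }],
}

# Step 3: ModifyDBInstance -- associate the replica with the new group;
# a reboot is still needed for the new group to take full effect.
attach_group = {
    "DBInstanceIdentifier": "test-replica",
    "DBParameterGroupName": "replica-writable",
}
```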
Question 4
A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes. Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?
A. Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region
B. Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables
C. Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table
D. Create an AWS Backup plan and assign the DynamoDB table as a resource
Correct Answer:
B. Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables
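A minimal sketch of the two API payloads behind option B, with a hypothetical table name and Region. The field names mirror the DynamoDB UpdateTable (global tables version 2019.11.21) and UpdateContinuousBackups APIs.

```python
# Add a global table replica in a second Region; replication lag is
# typically sub-second, comfortably inside the 2-minute RPO, and failover
# is an endpoint switch, inside the 1-minute RTO.
add_replica = {
    "TableName": "game-users",
    "ReplicaUpdates": [
        {"Create": {"RegionName": "us-west-2"}},
    ],
}

# Enable point-in-time recovery (repeat per Region) to also protect
# against logical corruption that replication would faithfully copy.
enable_pitr = {
    "TableName": "game-users",
    "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
}
```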
Question 5
A company uses an on-premises Microsoft SQL Server database to host relational and JSON data and to run daily ETL and advanced analytics. The company wants to migrate the database to the AWS Cloud. A database specialist must choose one or more AWS services to run the company's workloads. Which solution will meet these requirements in the MOST operationally efficient manner?
A. Use Amazon Redshift for relational data. Use Amazon DynamoDB for JSON data
B. Use Amazon Redshift for relational data and JSON data
C. Use Amazon RDS for relational data. Use Amazon Neptune for JSON data
D. Use Amazon Redshift for relational data. Use Amazon S3 for JSON data
Correct Answer:
B. Use Amazon Redshift for relational data and JSON data
Question 6
A coffee machine manufacturer is equipping all of its coffee machines with IoT sensors. An AWS IoT Core application writes the measurements for each record to Amazon Timestream. The records have multiple dimensions and measures. The measures include multiple measure names and values. An analysis application runs queries against the Timestream database, focusing on data from the current week. A database specialist needs to optimize the query costs of the analysis application. Which solution will meet these requirements?
A. Ensure that queries contain whole records over the relevant time range
B. Use time range, measure name, and dimensions in the WHERE clause of the query
C. Avoid canceling any query after the query starts running
D. Implement exponential backoff in the application
Correct Answer:
B. Use time range, measure name, and dimensions in the WHERE clause of the query
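An illustrative Timestream query for option B. Timestream bills on data scanned, so constraining the time range, measure name, and a dimension in the WHERE clause lets the engine prune partitions instead of reading whole records. The database, table, measure, and dimension names here are hypothetical.

```python
# Each WHERE predicate narrows what Timestream has to scan (and bill for).
query = """
SELECT machine_id, time, measure_value::double
FROM "coffee_iot"."measurements"
WHERE time >= ago(7d)                 -- prune to the current week
  AND measure_name = 'water_temp_c'   -- prune to one measure
  AND machine_id = 'cm-0042'          -- prune on a dimension
"""
```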
Question 7
A company has an Amazon RDS Multi-AZ DB instance that is 200 GB in size with an RPO of 6 hours. To meet the company's disaster recovery policies, the database backup needs to be copied into another Region. The company requires the solution to be cost-effective and operationally efficient. What should a Database Specialist do to copy the database backup into a different Region?
A. Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region
B. Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region
C. Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region
D. Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica
Correct Answer:
C. Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region
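A minimal sketch of the two RDS calls the scheduled Lambda functions in option C would make. Identifiers, Regions, and the KMS key alias are hypothetical; field names loosely follow the RDS CreateDBSnapshot and CopyDBSnapshot APIs (SourceRegion is the boto3 convenience parameter for cross-Region copies).

```python
# First Lambda, triggered every 6 hours by an EventBridge schedule:
create_snapshot = {
    "DBInstanceIdentifier": "prod-db",
    "DBSnapshotIdentifier": "prod-db-2024-01-01-00-00",
}

# Second Lambda, invoked in the destination Region once the snapshot
# is available:
copy_snapshot = {
    "SourceDBSnapshotIdentifier": "arn:aws:rds:us-east-1:123456789012:snapshot:prod-db-2024-01-01-00-00",
    "TargetDBSnapshotIdentifier": "prod-db-2024-01-01-00-00-dr",
    "SourceRegion": "us-east-1",
    "KmsKeyId": "alias/dr-snapshot-key",  # needed when the source snapshot is encrypted
}
```

Automated snapshots alone run only once daily (missing the 6-hour RPO) and cannot be replicated via S3, which is why the Lambda-driven manual snapshots are the fit here.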
Question 8
A software company is conducting a security audit of its three-node Amazon Aurora MySQL DB cluster. Which finding is a security concern that needs to be addressed?
A. The AWS account root user does not have the minimum privileges required for client applications
B. Encryption in transit is not configured for all Aurora native backup processes
C. Each Aurora DB cluster node is not in a separate private VPC with restricted access
D. The IAM credentials used by the application are not rotated regularly
Correct Answer:
D. The IAM credentials used by the application are not rotated regularly
Question 9
A company requires near-real-time notifications when changes are made to Amazon RDS DB security groups. Which solution will meet this requirement with the LEAST operational overhead?
A. Configure an RDS event notification subscription for DB security group events
B. Create an AWS Lambda function that monitors DB security group changes. Create an Amazon Simple Notification Service (Amazon SNS) topic for notification
C. Turn on AWS CloudTrail. Configure notifications for the detection of changes to DB security groups
D. Configure an Amazon CloudWatch alarm for RDS metrics about changes to DB security groups
Correct Answer:
A. Configure an RDS event notification subscription for DB security group events
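A minimal sketch of the single API call behind option A. The subscription name and topic ARN are hypothetical; the field names mirror the RDS CreateEventSubscription API.

```python
# One managed subscription replaces custom Lambda polling, CloudTrail
# parsing, or CloudWatch alarms -- hence the least operational overhead.
subscription = {
    "SubscriptionName": "db-security-group-changes",
    "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:db-sg-alerts",
    "SourceType": "db-security-group",
    "EventCategories": ["configuration change"],
    "Enabled": True,
}
```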
Question 10
A company wants to migrate its on-premises Oracle database to a managed open-source database engine in Amazon RDS by using AWS Database Migration Service (AWS DMS). A database specialist needs to identify the target engine in Amazon RDS based on the conversion percentage of database code objects such as stored procedures, functions, views, and database storage objects. The company will select the engine that has the least manual conversion effort. What should the database specialist do to identify the target engine?
A. Use the AWS Schema Conversion Tool (AWS SCT) database migration assessment report
B. Use the AWS Schema Conversion Tool (AWS SCT) multiserver assessor
C. Use an AWS DMS pre-migration assessment
D. Use the AWS DMS data validation tool
Correct Answer:
B. Use the AWS Schema Conversion Tool (AWS SCT) multiserver assessor
Question 11
A company has a Microsoft SQL Server 2017 Enterprise Edition database on Amazon RDS with the Multi-AZ option turned on. Automatic backups are turned on and the retention period is set to 7 days. The company needs to add a read replica to the RDS DB instance. How should a database specialist achieve this task?
A. Turn off the Multi-AZ feature, add the read replica, and turn Multi-AZ back on again
B. Set the backup retention period to 0, add the read replica, and set the backup retention period to 7 days again
C. Restore a snapshot to a new RDS DB instance and add the DB instance as a replica to the original database
D. Add the new read replica without making any other changes to the RDS database
Correct Answer:
D. Add the new read replica without making any other changes to the RDS database
Question 12
A company has an AWS CloudFormation template written in JSON that is used to launch new Amazon RDS for MySQL DB instances. The security team has asked a database specialist to ensure that the master password is automatically rotated every 30 days for all new DB instances that are launched using the template. What is the MOST operationally efficient solution to meet these requirements?
A. Save the password in an Amazon S3 object. Encrypt the S3 object with an AWS KMS key. Set the KMS key to be rotated every 30 days by setting the EnableKeyRotation property to true. Use a CloudFormation custom resource to read the S3 object to extract the password
B. Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30
C. Modify the CloudFormation template to use the AWS KMS key as the database password. Configure an Amazon EventBridge rule to invoke the KMS API to rotate the key every 30 days by setting the ScheduleExpression parameter to ***/30***
D. Integrate the Amazon RDS for MySQL DB instances with AWS IAM and centrally manage the master database user password
Correct Answer:
B. Create an AWS Lambda function to rotate the secret. Modify the CloudFormation template to add an AWS::SecretsManager::RotationSchedule resource. Configure the RotationLambdaARN value and, for the RotationRules property, set the AutomaticallyAfterDays parameter to 30
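A minimal sketch of the AWS::SecretsManager::RotationSchedule resource from option B, built as a Python dict to match the JSON template. "MasterSecret" and "RotationLambda" are hypothetical logical IDs for the secret and rotation function defined elsewhere in the template.

```python
import json

rotation_schedule = {
    "Type": "AWS::SecretsManager::RotationSchedule",
    "Properties": {
        "SecretId": {"Ref": "MasterSecret"},
        "RotationLambdaARN": {"Fn::GetAtt": ["RotationLambda", "Arn"]},
        # Secrets Manager invokes the Lambda rotation function on this cadence.
        "RotationRules": {"AutomaticallyAfterDays": 30},
    },
}
fragment = json.dumps({"Resources": {"MasterSecretRotation": rotation_schedule}}, indent=2)
```

Rotating a KMS key (options A and C) changes the encryption key material, not the password itself, which is why only the RotationSchedule approach satisfies the requirement.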
Question 13
A company hosts an online gaming application on AWS. A single Amazon DynamoDB table contains one item for each registered user. The partition key for each item is the user's ID. A daily report generator computes the sum totals of two well-known attributes for all items in the table that contain a dimension attribute. As the number of users grows, the report generator takes more time to generate the report. Which combination of steps will minimize the time it takes to generate the report? (Choose two.)
A. Create a global secondary index (GSI) that uses the user ID as the partition key and the dimension attribute as the sort key. Use the GSI to project the two attributes that the report generator uses to compute the sum totals
B. Create a local secondary index (LSI) that uses the user ID as the partition key and the dimension attribute as the sort key. Use the LSI to project the two attributes that the report generator uses to compute the sum totals
C. Modify the report generator to query the index instead of the table
D. Modify the report generator to scan the index instead of the table
E. Modify the report generator to call the BatchGetItem operation
Correct Answer:
A. Create a global secondary index (GSI) that uses the user ID as the partition key and the dimension attribute as the sort key. Use the GSI to project the two attributes that the report generator uses to compute the sum totals
C. Modify the report generator to query the index instead of the table
Question 14
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime. What is the FASTEST way to accomplish this?
A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster
B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster
C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster
D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover
Correct Answer:
D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover
Question 15
A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora. Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?
A. Stop the DB cluster and analyze how the website responds
B. Use Aurora fault injection to crash the master DB instance
C. Remove the DB cluster endpoint to simulate a master DB instance failure
D. Use Aurora Backtrack to crash the DB cluster
Correct Answer:
B. Use Aurora fault injection to crash the master DB instance
Question 16
A company wants to use AWS Organizations to create isolated accounts for different teams and functionality. The company’s database administrator needs to copy a DB instance from the main account in the us-east-1 Region to a new test account in the us-west-2 Region. The database administrator has already taken a snapshot of the encrypted Amazon RDS for PostgreSQL source DB instance in the main account. Which combination of steps must the database administrator take to copy the snapshot to the new account? (Choose three.)
A. Create a new AWS Key Management Service (AWS KMS) customer managed key in the main account in us-east-1. Replicate the key ID and key material to the test account in us-west-2
B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the main account in us-east-1. Copy the key to the test account in us-west-2
C. Copy the snapshot of the source DB instance to us-west-2 by using the AWS Key Management Service (AWS KMS) customer managed key. Enable encryption on the new snapshot. Share the snapshot with the test account
D. Copy the snapshot of the source DB instance to the test account in us-east-1. Switch to the test account and share the snapshot with us-west-2
E. In the test account, copy the shared snapshot to create a final snapshot. Use the final snapshot to create a new RDS for PostgreSQL DB instance
F. In the test account, copy the shared snapshot by using the copied AWS Key Management Service (AWS KMS) key to create a final encrypted snapshot. Use the final snapshot to create a new RDS for PostgreSQL DB instance
Correct Answer:
B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the main account in us-east-1. Copy the key to the test account in us-west-2
C. Copy the snapshot of the source DB instance to us-west-2 by using the AWS Key Management Service (AWS KMS) customer managed key. Enable encryption on the new snapshot. Share the snapshot with the test account
F. In the test account, copy the shared snapshot by using the copied AWS Key Management Service (AWS KMS) key to create a final encrypted snapshot. Use the final snapshot to create a new RDS for PostgreSQL DB instance
Question 17
An ecommerce company recently migrated one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition DB instance. The company expects a spike in read traffic due to an upcoming sale. A database specialist must create a read replica of the DB instance to serve the anticipated read traffic. Which actions should the database specialist take before creating the read replica? (Choose two.)
A. Identify a potential downtime window and stop the application calls to the source DB instance
B. Ensure that automatic backups are enabled for the source DB instance
C. Ensure that the source DB instance is a Multi-AZ deployment with Always On Availability Groups
D. Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM)
E. Modify the read replica parameter group setting and set the value to 1
Correct Answer:
B. Ensure that automatic backups are enabled for the source DB instance
C. Ensure that the source DB instance is a Multi-AZ deployment with Always On Availability Groups
Question 18
A database specialist needs to delete user data and sensor data 1 year after it was loaded in an Amazon DynamoDB table. TTL is enabled on one of the attributes. The database specialist monitors TTL rates on the Amazon CloudWatch metrics for the table and observes that items are not being deleted as expected. What is the MOST likely reason that the items are not being deleted?
A. The TTL attribute's value is set as a Number data type
B. The TTL attribute's value is set as a Binary data type
C. The TTL attribute's value is a timestamp in the Unix epoch time format in seconds
D. The TTL attribute's value is set with an expiration of 1 year
Correct Answer:
B. The TTL attribute's value is set as a Binary data type
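Option B is correct because DynamoDB TTL only honors a Number attribute holding a Unix epoch timestamp in seconds; a Binary (or String) value is silently ignored, so items are never deleted. A minimal sketch of a TTL value DynamoDB will act on (the attribute and item names are illustrative):

```python
from datetime import datetime, timedelta, timezone

def ttl_value(loaded_at: datetime, retention_days: int = 365) -> int:
    """Return a TTL DynamoDB will honor: epoch seconds, stored as a Number."""
    return int((loaded_at + timedelta(days=retention_days)).timestamp())

loaded = datetime(2024, 1, 1, tzinfo=timezone.utc)
item = {
    "sensor_id": {"S": "cm-0042"},
    # Type descriptor must be "N" (Number); "B" (Binary) is ignored by TTL.
    "expires_at": {"N": str(ttl_value(loaded))},
}
```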
Question 19
A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level. How can the Database Specialist meet these requirements?
A. Use AWS IAM database authentication and restrict access to the tables using an IAM policy
B. Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster
C. Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data
D. Define access privileges to the tables containing sensitive data in the pg_hba.conf file
Correct Answer:
C. Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data
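Illustrative statements for option C, collected as Python strings. The role and table names are hypothetical; table-level privileges are enforced natively by the PostgreSQL engine inside Aurora, with no AWS-side configuration (IAM authentication controls who can connect, not what they can read once connected).

```python
# Close the default door first, then grant only what each role needs.
grants = [
    "REVOKE ALL ON payroll.salaries FROM PUBLIC;",
    "GRANT SELECT ON payroll.salaries TO reporting_role;",
    "GRANT SELECT, UPDATE ON payroll.salaries TO hr_role;",
]
```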
Question 20
A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis. How should a database specialist automate the process of backing up the cluster data in compliance with these policies?
A. Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Set up an AWS Glue job in the source Region to copy the latest snapshot of the Amazon Redshift cluster from the source Region to the destination Region. Use a time-based schedule in AWS Glue to run the job on a daily basis
B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region. Create a snapshot copy grant in the destination Region specifying the new key. In the source Region, configure cross-Region snapshots for the Amazon Redshift cluster specifying the destination Region, the snapshot copy grant, and retention periods for the snapshot
C. Copy the AWS Key Management Service (AWS KMS) customer-managed key from the source Region to the destination Region. Create Amazon S3 buckets in each Region using the keys from their respective Regions. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function in the source Region to copy the latest snapshot to the S3 bucket in that Region. Configure S3 Cross-Region Replication to copy the snapshots to the destination Region, specifying the source and destination KMS key IDs in the replication configuration
D. Use the same customer-supplied key materials to create a CMK with the same private key in the destination Region. Configure cross-Region snapshots in the source Region targeting the destination Region. Specify the corresponding CMK in the destination Region to encrypt the snapshot
Correct Answer:
B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region. Create a snapshot copy grant in the destination Region specifying the new key. In the source Region, configure cross-Region snapshots for the Amazon Redshift cluster specifying the destination Region, the snapshot copy grant, and retention periods for the snapshot