AWS Certified Database – Specialty (DBS-C01) Certification Guide

Chapter 3

  1. 4

An internet gateway is not used with a private IP, so this answer is incorrect.

Security groups have all outbound ports open by default, so there is no need to open port 80 specifically.

A private subnet can connect to the internet with the correct configuration.

A correctly configured route table is required for any internet connectivity, so answer 4 is correct.
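
As a concrete illustration of this answer, here is a minimal boto3 sketch that adds a default route pointing at an internet gateway. The resource IDs are hypothetical, and the gateway is assumed to be already attached to the VPC.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs; the internet gateway must already be attached to the VPC.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",   # all non-local traffic...
    GatewayId="igw-0123456789abcdef0",  # ...is sent to the internet gateway
)
```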

  2. 3

A security group does not allow connections to other AWS services such as S3 and RDS by default, so this is incorrect.

Security groups block all inbound traffic by default, so this answer is incorrect.

Security groups do allow all outbound traffic by default, so 3 is the correct answer.

A route table is required for an internet gateway to be used, so this is incorrect.

  3. 1 and 4

Each subnet can only be deployed in a single AZ, so 1 is a correct answer.

The smallest CIDR block allowed for a subnet is /28, so this is incorrect.

Private subnets connect to...

Chapter 4

  1. 3

Moving from RDS to EC2 doesn't solve the storage problem, and RDS also offers storage autoscaling.

Using S3 might reduce the incoming writes, but it is very complex; therefore, this isn't correct.

Enabling storage autoscaling is the easiest solution and is the correct answer (see the sketch below).

A read replica will not help with full storage, so this cannot be correct.
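
As a sketch of the correct answer, RDS storage autoscaling is enabled by setting a maximum storage threshold on the instance; the instance identifier and limit below are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage enables storage autoscaling up to that limit (GiB).
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",   # hypothetical instance name
    MaxAllocatedStorage=1000,
    ApplyImmediately=True,
)
```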

  2. 2

If you cannot connect, then the account details don't matter at this stage, so this isn't correct.

The inbound rules are likely blocking any connections to the RDS instance from your EC2, so this is the correct answer.

The outbound rules allow all traffic by default, so this is unlikely to be the cause.

As you are connecting from within your VPC, you will not need a NAT gateway, so this is incorrect.

  3. 1

The primary is upgraded first, so this is the correct answer.

There is no downtime during an upgrade, so this is not correct.

The standby...

Chapter 5

  1. 3

Creating a multi-AZ deployment with another read replica works, but it isn't the most cost-efficient option.

Moving to EC2 is not a valid solution.

Creating a multi-AZ read replica is a cost-effective and highly available solution, and is correct.

Creating two single-AZ read replicas is not highly available.

  2. 4

RDS only supports multi-AZ in the same region.

Aurora Global Tables will work, but this is the incorrect process, as you need read replicas.

Deploying MySQL in two regions will not work as there is no replication.

Global tables will work, and they use read replicas, so this is the correct answer.

  3. 2

Aurora does not fully meet the needs of this scenario.

Aurora Serverless will meet the needs of the temporary nature of the application.

RDS does not fully meet the needs of this scenario.

MySQL on EC2 does not fully meet the needs of this scenario.

  4. 1

Creating read replicas in a different region...

Chapter 6

  1. 4

TTL will only help if it can remove older records; therefore, this won't immediately help and is incorrect.

DAX may improve performance but it is not the most cost-efficient option.

DynamoDB Streams will not help with any performance issue.

Autoscaling is the correct answer, as the issue is a restriction of provisioned capacity units.
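
As a minimal sketch of how provisioned-capacity autoscaling is wired up through Application Auto Scaling, assuming a hypothetical table named MyTable:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",   # hypothetical table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Scale to keep consumed capacity near 70% of provisioned capacity.
autoscaling.put_scaling_policy(
    PolicyName="mytable-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```

The same pair of calls would be repeated for WriteCapacityUnits if writes are also throttled.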

  2. 1

An item is 3 KB, so in eventually consistent mode one RCU serves two reads per second (reads are billed in 4 KB units, and an eventually consistent read costs half an RCU), while each standard write of a 3 KB item needs three WCUs (writes are billed in 1 KB units). This gives you 50 RCUs and 30 WCUs.
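
The arithmetic can be checked with a short script; the throughput figures (100 eventually consistent reads and 10 standard writes per second) are assumed from the question.

```python
import math

def rcus(item_kb: float, reads_per_sec: int, eventually_consistent: bool = True) -> int:
    units = math.ceil(item_kb / 4)  # reads are billed in 4 KB units
    per_read = 0.5 if eventually_consistent else 1.0
    return math.ceil(reads_per_sec * units * per_read)

def wcus(item_kb: float, writes_per_sec: int) -> int:
    units = math.ceil(item_kb / 1)  # standard writes are billed in 1 KB units
    return writes_per_sec * units

print(rcus(3, 100))  # 50 RCUs: a 3 KB item rounds up to one 4 KB unit, halved
print(wcus(3, 10))   # 30 WCUs: a 3 KB item needs three write units
```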

  3. 4

25 GB of storage is provided free for each account when using DynamoDB.

  4. 2

An LSI can only be provisioned when the table is created.

A GSI can use different keys from the base table.

  5. 3

This error is seen when using DAX and not when using DynamoDB, so autoscaling won't help.

This error is seen when using DAX and not when using DynamoDB, so on-demand won't help.

This is a DAX throttling error...

Chapter 7

  1. 2

Increasing the queue to the highest priority will improve the performance, but this will impact other queries.

Increasing concurrency scaling will allow the queue to scale as required, so this is the correct answer.

A query monitoring rule will take too long to scale and will not fix the problem.

Queue hopping only works for manually configured workload management.

  2. 4

The clue in the question is "least possible customization and coding". All of these answers will work, but the simplest is using QuickSight directly, as it can query all of those sources. This was partially a trick question because of the Redshift option.

  3. 4

As the data can be easily recovered, there is no need to take backups. The other answers all still involve taking backups, so they are not correct.

  4. 3

DocumentDB is the only solution that supports JSON document querying.

  5. 4

DocumentDB has a document limit of 16 MB, so the 20 MB document is too large.

Chapter 8

  1. 2 and 4

The Neptune VPC endpoint won't be used to import from S3.

You do need an S3 VPC endpoint, so this is correct.

The EC2 instance does not need to access S3, so this isn't correct.

Neptune will need an IAM role to access S3, so this is correct.

S3 will not be reading from Neptune, so it does not need an IAM role.
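
To make the moving parts of this answer concrete, here is a hedged sketch of kicking off a Neptune bulk load: an HTTP request to the cluster's loader endpoint naming the S3 source and the IAM role Neptune assumes. The endpoint, bucket, role ARN, and region are all hypothetical.

```python
import requests

# Hypothetical cluster endpoint; the call must come from inside the VPC,
# and an S3 VPC endpoint must exist for Neptune to reach the bucket.
loader_url = ("https://my-neptune.cluster-abc123.us-east-1"
              ".neptune.amazonaws.com:8182/loader")

resp = requests.post(loader_url, json={
    "source": "s3://my-graph-bucket/load/",   # hypothetical bucket
    "format": "csv",                          # Gremlin CSV files
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneS3Load",  # role Neptune assumes
    "region": "us-east-1",
    "failOnError": "TRUE",
})
print(resp.json())  # returns a loadId that can be polled for status
```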

  2. 3

You cannot delete from Timestream, so that is the only correct answer.

  3. 4

QLDB only supports 20 active tables per ledger, so you have hit the maximum allowed.

  4. 1 and 5

Gremlin and SPARQL are the only languages supported by Neptune.

  5. 1

The _ql_committed tables show all the history for any modifications, and this is the correct answer.

Chapter 9

  1. 2 and 3

You do not know when the peak hours for this application are, so setting backups to midnight might make things worse.

Setting reserved-memory-percent stops backups from taking all the memory and is correct.

Running a backup from the read replica will also reduce the load on the primary instance and is correct.

Additional read replicas will not help as the load is on the primary instance.

Increasing the number of shards will not help as the load will remain the same.
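
As a sketch of the reserved-memory-percent answer above, the parameter is set in a custom ElastiCache parameter group; the group name and value are hypothetical.

```python
import boto3

elasticache = boto3.client("elasticache")

# Reserve 25% of node memory for non-data use such as backup overhead.
elasticache.modify_cache_parameter_group(
    CacheParameterGroupName="my-redis-params",  # hypothetical custom group
    ParameterNameValues=[
        {"ParameterName": "reserved-memory-percent", "ParameterValue": "25"}
    ],
)
```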

  2. 1

Write-through applies the changes to the cache first before writing to the database, so the data is always current; this is the correct answer.

Lazy loading is where the cache only loads data after it is requested, so it doesn't always hold current data.

Cache-aside is where the application can directly access both the database and the cache.

Read-through is where the application can only access the cache directly.

  3. 1

Redis (cluster mode...

Chapter 10

  1. 3

Running a RAC cluster on EC2 doesn't meet the requirements of the solution.

A Data Pump export would work, but it will not meet the 5-minute outage window allowed.

Using DMS in CDC mode will allow the application to take only a minimal outage and is the correct answer.

SCT is typically used for changing a database engine and is not required in this scenario.
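
A minimal sketch of the correct answer: a DMS task created in full-load-plus-CDC mode so ongoing changes are replicated while the application keeps running. The ARNs and table mapping are hypothetical placeholders.

```python
import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-rds-cdc",
    SourceEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:SRC",   # hypothetical
    TargetEndpointArn="arn:aws:dms:eu-west-1:123456789012:endpoint:TGT",   # hypothetical
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:123456789012:rep:INST",  # hypothetical
    MigrationType="full-load-and-cdc",  # full copy, then ongoing change capture
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```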

  2. 3

You cannot restore from S3 cross-region, so this will not work.

You can use S3 cross-region replication and restore, but there will be a long downtime, so this is not the best solution.

Using DMS will allow a migration with minimal downtime, so this is the best solution.

There is no cross-region replication for RDS SQL Server.

  3. 2

This is a method to reduce the allocated storage, but it will involve downtime.

Using CDC will allow the storage to be reduced with minimal downtime and is the best solution.

Using a backup and restore method may reduce storage needs but it...

Chapter 11

  1. 4

Enabling cross-region replication to pull all the S3 data into one region would work, but it isn't cost-efficient.

DMS cannot migrate a Glue catalog.

You cannot give permission for Glue to read another catalog in this way.

Glue can create a data catalog across all regions, allowing Athena to query them, so this is the correct answer.

  2. 3

RDS will work but it is not cost-effective.

Redshift will work but it is not cost-effective.

Athena can directly query data from S3, so this is the most cost-effective solution.

Using EC2 to do this is complex and unnecessary.
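
To illustrate why Athena is the cost-effective choice here, querying data in place on S3 needs nothing more than a query and a results location; the database, table, and bucket names below are hypothetical.

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT * FROM sales LIMIT 10",        # hypothetical table
    QueryExecutionContext={"Database": "my_glue_db"},  # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution for completion
```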

  3. 2

Athena cannot query directly from Glacier.

Moving the data to standard S3 is the most cost-effective solution.

DynamoDB and Redshift would work, but they are not as cost-effective.

  4. 3 and 5

Revoking permissions is not a good solution.

Deletion protection does not exist for stacks.

Termination protection is a good solution, so...

Chapter 12

  1. 3

You cannot modify the login.cnf file on RDS.

Provisioning a database in a public subnet is not secure.

Provisioning a database in a private subnet protected by security groups is the correct answer.

Using NACLs can help further secure a VPC, but you also need security groups, so this is incorrect.

  2. 2

Exporting to S3 is not an option here.

Creating a snapshot, encrypting a copy of it, and then restoring a new instance from the encrypted copy is the best option (sketched below).

You cannot add encryption using Modify, so this is incorrect.
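
A sketch of the snapshot-copy approach, assuming a hypothetical unencrypted instance called mydb and the AWS managed RDS KMS key:

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(DBInstanceIdentifier="mydb",
                       DBSnapshotIdentifier="mydb-plain")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-plain")

# 2. Copy the snapshot with a KMS key; the copy is encrypted.
rds.copy_db_snapshot(SourceDBSnapshotIdentifier="mydb-plain",
                     TargetDBSnapshotIdentifier="mydb-encrypted",
                     KmsKeyId="alias/aws/rds")
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-encrypted")

# 3. Restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-secure",
    DBSnapshotIdentifier="mydb-encrypted",
)
```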

  3. 3

You cannot restore a snapshot into a database with encryption enabled.

Using IAM authentication for each individual user will remove the reliance on shared passwords and will enforce the policy of each individual having their own account.
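
With IAM database authentication enabled, each user fetches a short-lived token instead of sharing a password. A minimal sketch, with a hypothetical endpoint and username:

```python
import boto3

rds = boto3.client("rds")

# The caller's IAM identity must be allowed the rds-db:connect action.
token = rds.generate_db_auth_token(
    DBHostname="mydb.abc123.eu-west-1.rds.amazonaws.com",  # hypothetical endpoint
    Port=3306,
    DBUsername="app_user",                                 # hypothetical DB user
    Region="eu-west-1",
)
# The token is used as the password in a normal database connection;
# it expires after 15 minutes.
```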

  4. 2

Applications use the RDS endpoint to access the database, so the IP change would not break the service.

It is most likely the new EC2 is not in the security group...

Chapter 13

  1. 2 and 4

To do this, you need to enable CloudWatch alarms and use SNS to send the notifications to the specified recipients.

SES only handles emailing and not notifications.

SQS is used for application queues, not notifications.

Lambda is a complicated solution.

  2. 4

Using anomaly-detection-based rules in CloudWatch is the best solution to quickly find workload changes and spikes.

  3. 2

The simplest solution is to use CloudWatch metrics and SNS to send the notification to the email address.
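
A sketch of that pattern: a CloudWatch alarm on an RDS metric whose alarm action is an SNS topic with an email subscription. The instance name, topic ARN, and threshold are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="rds-high-cpu",
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],  # hypothetical
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    # Hypothetical SNS topic with the email address subscribed to it.
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:dba-alerts"],
)
```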

Chapter 14

  1. 1 and 3

Changing the backup times to outside peak hours is a good solution.

Increasing the instance class may help but it will not be cost-effective.

Using a read replica for backups will reduce the load on the primary node and is a good solution.

Increasing the number of shards will not help.

Increasing the storage will not help.

Changing to provisioned IOPS will not help.

  2. 1

Creating an AWS Backup policy for 90 days and applying it to all RDS instances is the simplest solution.

Modifying each RDS instance with 90-day backup retention will work but it is not the best solution, as new instances may get missed and it is a manual effort.

Using Lambda is a very complicated solution.

There is no such feature on RDS to push backups to Glacier.
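
A sketch of the correct answer: one AWS Backup plan with a 90-day lifecycle, plus a tag-based selection so newly created RDS instances are picked up automatically. All names, the schedule, and the tag are hypothetical.

```python
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "rds-90-day-retention",
    "Rules": [{
        "RuleName": "daily-backup",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 3 * * ? *)",   # daily at 03:00 UTC
        "Lifecycle": {"DeleteAfterDays": 90},
    }],
})

# Assign resources by tag so new instances join the plan automatically.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-rds",
        "IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup-plan",
            "ConditionValue": "rds-90-day",
        }],
    },
)
```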

  3. 2

A read replica in a secondary region is a very expensive solution.

AWS Backup can copy backups to another region, so this is the best solution.

Manually...

Chapter 15

  1. 1

Using Application Insights will help identify the root cause quickly and is the correct answer.

Performance Insights is useful if this is definitely a database problem, but it will not help find any potential application issue.

AWS X-Ray is used to diagnose issues with microservices and it does not meet the needs of this use case.

Contacting AWS support is not a good solution, as they will not understand your application.

  2. 3

All answers could be correct, but the most likely answer is that security groups have been altered, which will stop your application from connecting to the database.

  3. 3

This command is missing the --apply-immediately flag.

The wrong syntax is used: --alter-database is not valid.

modify-db-instance is the correct command, and it includes the --apply-immediately flag, so this is the correct answer (see the sketch below).

The wrong syntax is used: --alter-db-instance is not valid.
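
For reference, the boto3 equivalent of the correct command looks like the following sketch; the instance identifier and new instance class are hypothetical.

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="mydb",     # hypothetical instance
    DBInstanceClass="db.r5.xlarge",  # the change being requested
    ApplyImmediately=True,  # without this, the change waits for the
                            # next maintenance window
)
```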

Chapter 16

  1. 1

Using TTL will automatically remove data older than 30 days, and is the correct answer.

A Lambda function will work but it is complex and not the best solution.

Creating a new DynamoDB table is not a good solution.

DynamoDB Streams won't remove the items from the source table.
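
A sketch of the TTL answer: enable TTL on a hypothetical attribute holding an epoch-seconds expiry, then stamp each item 30 days ahead.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on a hypothetical table and attribute name.
dynamodb.update_time_to_live(
    TableName="MyTable",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each item then carries an expiry timestamp 30 days in the future;
# DynamoDB deletes the item shortly after that time passes.
expires_at = int(time.time()) + 30 * 24 * 3600
dynamodb.put_item(
    TableName="MyTable",
    Item={"pk": {"S": "order#123"}, "expires_at": {"N": str(expires_at)}},
)
```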

  2. 1

Creating a separate UAT database is the best option; therefore, a PITR recovery into a different region is the best solution.

Using DynamoDB Streams and Lambda is a very complicated solution.

Using Glue would likely not work for the migration.

Adding Global Tables would mean that changes made by testing would be written to the production database and, therefore, this doesn't meet the needs of the business.

  3. 4

You cannot modify parameters at the RDS instance level using SET, so this isn't correct.

You cannot modify the parameters in the default parameter group.

Modifying the instance to use the default parameter group...
