Exam AWS Certified Developer Associate topic 1 question 110 discussion

Question #: 110
Topic #: 1

A developer manages an application that interacts with Amazon RDS. After observing slow performance with read queries, the developer implements Amazon
ElastiCache to update the cache immediately following the primary database update.
What will be the result of this approach to caching?

  • A. Caching will increase the load on the database instance because the cache is updated for every database update.
  • B. Caching will slow performance of the read queries because the cache is updated when the cache cannot find the requested data.
  • C. The cache will become large and expensive because the infrequently requested data is also written to the cache.
  • D. Overhead will be added to the initial response time because the cache is updated only after a cache miss.
Suggested Answer: C

Comments

CHRIS12722222
Highly Voted 3 years, 3 months ago
C. This is write through strategy
upvoted 19 times
...
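For context, the write-through strategy this comment names can be sketched in a few lines of Python. This is a minimal illustration, not real RDS/ElastiCache client code; the `Database` and `WriteThroughStore` classes are hypothetical stand-ins:

```python
# Minimal sketch of the write-through strategy: every database write
# also updates the cache, so reads rarely miss, but every update pays
# an extra cache write and cold data accumulates in the cache.
# Database and WriteThroughStore are illustrative stand-ins.

class Database:
    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows.get(key)


class WriteThroughStore:
    def __init__(self, db):
        self.db = db
        self.cache = {}  # stands in for ElastiCache

    def write(self, key, value):
        self.db.put(key, value)   # 1. update the primary database
        self.cache[key] = value   # 2. immediately update the cache

    def read(self, key):
        if key in self.cache:     # reads are served from the cache
            return self.cache[key]
        return self.db.get(key)


store = WriteThroughStore(Database())
store.write("user:1", "Alice")
print(store.read("user:1"))  # served from the cache, no DB round trip
```

Note that `write` touches the cache on every update, whether or not that key is ever read again, which is the "cache churn" downside cited elsewhere in this thread.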
Awsexam100
Highly Voted 3 years, 1 month ago
It's D. There is a cache miss penalty: each cache miss results in three trips (an initial request for the data from the cache, a query of the database for the data, and a write of the data to the cache). These misses can cause a noticeable delay in data getting to the application. https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html (Lazy loading)
upvoted 6 times
JonasKahnwald
5 months, 2 weeks ago
We don't use lazy loading here; we use write-through.
upvoted 1 times
...
...
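The lazy-loading (cache-aside) flow from the AWS docs linked above, with its three-trip miss penalty, can be sketched the same way. The class and the `trips` counter are illustrative only, not ElastiCache API calls:

```python
# Minimal sketch of lazy loading (cache-aside): the cache is populated
# only after a miss, so the first read of a key pays three trips.
# LazyLoadingStore is an illustrative stand-in, not a real client.

class LazyLoadingStore:
    def __init__(self, db):
        self.db = db       # any mapping with .get(key); stands in for RDS
        self.cache = {}    # stands in for ElastiCache
        self.trips = 0     # round-trip counter, for illustration only

    def read(self, key):
        self.trips += 1                # trip 1: ask the cache
        if key in self.cache:
            return self.cache[key]     # cache hit: one trip total
        self.trips += 1                # trip 2: miss, query the database
        value = self.db.get(key)
        self.trips += 1                # trip 3: write the result to the cache
        self.cache[key] = value
        return value


store = LazyLoadingStore({"user:1": "Alice"})
store.read("user:1")  # first read misses: three trips
store.read("user:1")  # second read hits: one trip
print(store.trips)    # 4
```

The scenario in the question updates the cache on every database write instead of on misses, which is why most of the thread classifies it as write-through rather than this pattern.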
seml
Most Recent 2 months, 2 weeks ago
Selected Answer: A
The primary concern is increased DB load, not cache size, and that eliminates C.
upvoted 1 times
...
sumanshu
4 months, 1 week ago
Selected Answer: A
C) Eliminated - Question does not mention that the cache stores infrequently requested data unnecessarily.
upvoted 1 times
sumanshu
4 months, 1 week ago
B) Eliminated - Caching is generally designed to improve read query performance by providing faster access to frequently accessed data.
upvoted 1 times
sumanshu
4 months, 1 week ago
C) Looks also valid - https://www.tutorialspoint.com/awselasticache/awselasticache_write_through.htm
upvoted 1 times
...
...
...
Ibrahim24
1 year, 1 month ago
Selected Answer: C
C: The cache will be updated with every change in the DB even though that data is not being read frequently. This is the write-through strategy. D cannot be the right answer, since the cache is being updated on DB changes, not on cache misses.
upvoted 2 times
...
SD_CS
1 year, 2 months ago
Selected Answer: C
There would be a lot of writes.
upvoted 2 times
...
xdkonorek2
1 year, 4 months ago
Selected Answer: A
IMO it's A; it's the only obvious answer. Per write you have to read the updated record from the database, because not every update has to be a full record, and in relational databases an update operation returns the number of rows updated, not whole entities, so you have to follow up with a read op. B: this behavior isn't defined in the question. C: how do we know there is infrequently accessed data at all? How do we know the TTL in the cache? We don't. D: "the cache is updated only after a cache miss" wasn't defined in the question; the cache is updated only on updates, regardless of whether the cache key is missing.
upvoted 2 times
...
rcaliandro
1 year, 10 months ago
Selected Answer: C
C: each update to the database is also applied to the cache (write-through instead of the lazy-loading cache strategy). Given that each update/write to the DB is also applied to the cache, even for really infrequent data, the result is a really heavy cache.
upvoted 3 times
...
BATSIE
1 year, 11 months ago
D. When the cache cannot find the requested data, it is referred to as a cache miss. In this scenario, after the primary database is updated, the cache is immediately updated. However, if a read query is made and the requested data is not found in the cache, there will be a cache miss, which will cause overhead in the initial response time. The cache will then be updated with the requested data, and subsequent read queries for the same data will be faster because the data is already in the cache.
upvoted 1 times
...
Rpod
2 years ago
C. The cache will become expensive and huge.
upvoted 1 times
...
Syre
2 years ago
Selected Answer: A
Answer here is A. Option C is incorrect because infrequently requested data should not be written to the cache, as this can cause the cache to become bloated and inefficient. Option D is incorrect because the entryPoint parameter is used to specify a command that is run when the container starts, and is not related to passing environment variables to the container.
upvoted 2 times
ics_911
9 months, 2 weeks ago
Buddy you need to study more. Most of your answers were wrong in judgment and explanation.
upvoted 1 times
...
...
Krok
2 years, 1 month ago
Selected Answer: C
C. This is the write-through strategy. As described in Stephane Maarek's course on Udemy, this approach has the following con: "Cache churn – a lot of the data will never be read."
upvoted 2 times
...
qiaoli
2 years, 1 month ago
Selected Answer: C
The scenario is about write-through, so C. D is about lazy loading, which isn't mentioned.
upvoted 2 times
...
gaddour_med
2 years, 2 months ago
Selected Answer: C
It cannot be D, because the strategy used in the question updates the cache for each data update in the database, not when data is missing from the cache.
upvoted 2 times
tony554556
2 years, 2 months ago
C is correct, your explanation is very clear. Thanks
upvoted 2 times
...
...
Phinx
2 years, 3 months ago
Selected Answer: C
I would go for C.
upvoted 1 times
...
BobAWS23
2 years, 3 months ago
Selected Answer: C
ElastiCache can implement both write-through and lazy loading. The key phrase is: "to update the cache immediately following the primary database update." This is write-through. Look at "Cache churn": https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html
upvoted 3 times
PawKam
2 years ago
I don't understand this comment. The "cache churn" section it refers to clearly states: "The disadvantages of write-through are as follows [...] most data is never read, which is a waste of resources," which points to D.
upvoted 1 times
PawKam
2 years ago
Sorry, my bad, I mixed answers. C seems to be correct. Now I understand this comment.
upvoted 1 times
...
...
...
bearcandy
2 years, 3 months ago
Selected Answer: D
It would be C if it didn't include ElastiCache; this technique is called write-through. As it mentions ElastiCache, the technique is lazy loading, so the answer is D. Look at the official documentation: https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html
upvoted 1 times
Phinx
2 years, 3 months ago
ElastiCache can do both lazy loading and write-through. The catch here is "to update the cache immediately following the primary database update."
upvoted 3 times
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other