

Exam Professional Cloud Database Engineer topic 1 question 57 discussion

Actual exam question from Google's Professional Cloud Database Engineer
Question #: 57
Topic #: 1

You are working on a new centralized inventory management system to track items available in 200 stores, which each have 500 GB of data. You are planning a gradual rollout of the system to a few stores each week. You need to design an SQL database architecture that minimizes costs and user disruption during each regional rollout and can scale up or down on nights and holidays. What should you do?

  • A. Use Oracle Real Application Cluster (RAC) databases on Bare Metal Solution for Oracle.
  • B. Use sharded Cloud SQL instances with one or more stores per database instance.
  • C. Use a Bigtable cluster with autoscaling.
  • D. Use Cloud Spanner with a custom autoscaling solution.
Suggested Answer: D

Comments

dynamic_dba
Highly Voted 1 year, 8 months ago
D. A SQL database architecture rules out Bigtable. Minimizing costs rules out Oracle RAC. That leaves B and D. B would work, with each Cloud SQL instance dedicated to a few stores so that a rollout would not impact the Cloud SQL instances already running. The downside is scaling up/down: changing the vCPU count on a Cloud SQL instance requires a restart, and that is disruption. Spanner doesn't autoscale by itself either, but there is a tool for Spanner called Autoscaler which can automate scaling up and down. So on balance, D is the best answer. https://cloud.google.com/spanner/docs/autoscaling-overview
upvoted 11 times
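The Autoscaler tool referenced above supports, among other methods, schedule-driven scaling, which is essentially what "scale up or down on nights and holidays" asks for. As a rough illustration only (the node counts, opening hours, and holiday calendar below are all invented for the example, not taken from the tool), the core decision could be sketched as:

```python
from datetime import date, time

# Hypothetical schedule-based scaling decision for a Spanner instance.
# All thresholds and the holiday calendar are made-up example values.
HOLIDAYS = {date(2024, 1, 1), date(2024, 12, 25)}
PEAK_NODES = 10      # daytime capacity during the rollout
OFF_PEAK_NODES = 3   # nights and holidays

def target_node_count(day: date, now: time) -> int:
    """Return the desired Spanner node count for the given moment."""
    if day in HOLIDAYS:
        return OFF_PEAK_NODES
    # Treat 08:00-20:00 as store opening hours.
    if time(8, 0) <= now < time(20, 0):
        return PEAK_NODES
    return OFF_PEAK_NODES
```

In a real deployment the Autoscaler also watches utilization metrics and applies the change through the Spanner admin API; this sketch only shows why the scaling decision itself is automatable without restarts, unlike resizing a Cloud SQL instance.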
Jason_Cloud_at
9 months ago
I agree with you on ruling out A and C. However, Cloud Spanner is much costlier than Cloud SQL, so wouldn't B be the best answer?
upvoted 1 times
rglearn
Most Recent 4 weeks, 1 day ago
Selected Answer: D
Cloud SQL scale up/down causes disruption; in the case of Cloud Spanner it is automatic. Spanner being a global solution is also a big plus. The technical downside of multi-regional Spanner is that only one region is the leader, so an app in a non-leader region may observe slightly higher write latency.
upvoted 1 times
RaphaelG
10 months, 1 week ago
Selected Answer: B
To me, it is "B" here, for a couple of reasons. The statement "minimizes costs and user disruption during each regional rollout" explicitly ties the disruption to the rollout, not to scale-up or scale-down, and considering it's only a few stores per database, I would presume a resize would take a few minutes tops. Lastly, the cost is a bit of a giveaway as well, since Cloud SQL is roughly half the price (I did some rough estimates recently using europe-west2 as my benchmark).
upvoted 2 times
learnazureportal
1 year, 2 months ago
The correct answer is ==> B. Use sharded Cloud SQL instances with one or more stores per database instance.
upvoted 1 times
Mitra123
1 year, 4 months ago
B, guys. It is either B or D. The keyword is "minimize cost". Although D is the best solution, B is less costly.
upvoted 2 times
Nirca
1 year, 8 months ago
Selected Answer: D
1. Cloud SQL maxes out at 64 TB per instance, so it is unable to hold 100 TB of data. https://cloud.google.com/sql/docs/quotas#metrics_collection_limit 2. Scaling is done manually on Cloud SQL.
upvoted 3 times
zanhsieh
1 year, 9 months ago
Selected Answer: D
Cloud SQL maxes out at 64 TB per instance, so a single instance is unable to hold 200 * 500 GB of data. A: No, Oracle RAC cannot scale up or down. B: No, Cloud SQL cannot scale up or down automatically, sharded Cloud SQL sounds awkward, and it doesn't meet "minimize costs and user disruption during each REGIONAL rollout"; it also can't break the 64 TB storage limit. C: No, Bigtable handles relational workloads poorly. https://cloud.google.com/sql/docs/quotas#:~:text=Cloud%20SQL%20storage%20limits,core%3A%20Up%20to%203%20TB.
upvoted 2 times
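The storage arithmetic behind this back-and-forth is worth spelling out: 200 stores at 500 GB each is 100 TB in total, which exceeds the 64 TB per-instance limit only if everything lands on one instance. A quick check (the 5-stores-per-instance figure is just an example, not from the question):

```python
# Capacity claim from the thread: 200 stores x 500 GB each versus the
# 64 TB per-instance Cloud SQL storage limit.
stores = 200
gb_per_store = 500
total_tb = stores * gb_per_store / 1000   # 100 TB across the whole fleet

cloud_sql_limit_tb = 64  # per-instance limit cited in the comments

# A single instance cannot hold everything:
single_instance_fits = total_tb <= cloud_sql_limit_tb   # False

# But a sharded layout with a few stores per instance stays far under
# the limit, e.g. 5 stores per instance:
tb_per_shard = 5 * gb_per_store / 1000   # 2.5 TB per shard
```

So the 64 TB argument rules out a single monolithic Cloud SQL instance, but not option B as written, since B shards with one or more stores per instance.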
JayGeotab
1 year, 10 months ago
Selected Answer: D
Cloud SQL maxes out at 64 TB per instance, so it is unable to hold 200 * 500 GB of data. D: choose Spanner.
upvoted 2 times
wolfie09
1 year, 5 months ago
It's written as 500 GB of data per store, with one Cloud SQL instance per one or a couple of stores; what's so hard to understand?
upvoted 1 times
marpayer
1 year, 10 months ago
A: No, Oracle RAC cannot scale up or down. C: No, Bigtable is for non-SQL workloads. That leaves B or D. B: Cloud SQL cannot scale up or down automatically, sharded Cloud SQL sounds awkward, and it doesn't meet "minimize costs and user disruption during each REGIONAL rollout". For me, D is the best option.
upvoted 1 times
chelbsik
1 year, 11 months ago
Selected Answer: B
Cloud SQL sharding looks like a good option since we need to minimize costs and we don't need global scaling https://cloud.google.com/community/tutorials/horizontally-scale-mysql-database-backend-with-google-cloud-sql-and-proxysql
upvoted 4 times
pk349
1 year, 11 months ago
B: Use sharded Cloud SQL instances with one or more stores per database instance. Sharding makes horizontal scaling possible by partitioning the database into smaller, more manageable parts (shards), then deploying the parts across a cluster of machines. Data queries are routed to the corresponding server automatically, usually with rules embedded in application logic or a query router.
upvoted 1 times
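The routing rule described in this comment can be sketched with a stable hash that pins each store to the same shard; the shard count and instance names below are invented for illustration:

```python
import hashlib

# Hypothetical shard-routing rule: map a store ID to one of N sharded
# Cloud SQL instances. A stable hash keeps each store pinned to the
# same shard across calls; the names here are made up.
SHARDS = [f"cloudsql-shard-{i}" for i in range(4)]

def shard_for_store(store_id: str) -> str:
    """Route a store's queries to its dedicated shard."""
    digest = hashlib.md5(store_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

A range-based rule (e.g. stores 1-50 on shard 0) is equally common and can make a gradual store-by-store rollout easier to reason about, since new stores land on new shards without touching existing ones.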
Community vote distribution: A (35%), C (25%), B (20%), Other