Welcome to ExamTopics

Exam Professional Cloud Architect topic 1 question 19 discussion

Actual exam question from Google's Professional Cloud Architect
Question #: 19
Topic #: 1

The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk.
What should they change to get better performance from this system?

  • A. Increase the virtual machine's memory to 64 GB
  • B. Create a new virtual machine running PostgreSQL
  • C. Dynamically resize the SSD persistent disk to 500 GB
  • D. Migrate their performance metrics warehouse to BigQuery
  • E. Modify all of their batch jobs to use bulk inserts into the database
Suggested Answer: C

Comments

shandy
Highly Voted 4 years, 12 months ago
Answer is C because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Increasing the persistent disk capacity will increase its throughput and IOPS, which in turn improves the performance of MySQL.
upvoted 67 times
Eroc
Highly Voted 5 years ago
Assuming that the database is approaching its hardware limits, both options A and C would improve performance: A would increase CPU and memory, but C would increase capacity by more. If it is a software problem, it is likely a hashing problem (the search and sort algorithms are not specific enough to search within the database). That problem would not be fixed just by migrating to PostgreSQL or BigQuery, but modifying the inserts would help the situation because it would entail specifying data lookups. However, that helps only inserts, not search performance, and it does not help with normalization. So B, D, and E are eliminated. Since statistics is based on sets, the larger the number of sets, the better the predictions. This means that the largest amount of capacity would increase not only compute performance but also knowledge enhancement. So C beats A.
upvoted 34 times
nitinz
3 years, 8 months ago
C. Universal truth: OLTP database performance depends on IOPS. SSD is the best solution for higher IOPS, and in GCP, the bigger the disk size, the higher the IOPS.
upvoted 8 times
trainor
3 years, 11 months ago
Also, if you increased the memory size, it would not be an n1-standard-8 anymore. You would have to change the machine type, not simply increase memory.
upvoted 6 times
tartar
4 years, 3 months ago
C is ok.
upvoted 8 times
haroldbenites
2 years, 11 months ago
When you increase the memory you need to shut down the machine, but when you increase the disk, that is not necessary. Answer is C.
upvoted 2 times
Ric350
3 months ago
Respectfully, this isn't accurate. On Google Compute Engine, you can often increase the memory of a running virtual machine without needing to shut it down. This is known as live migration or memory hot-add.
upvoted 1 times
Mission94
5 months, 1 week ago
Since it's running MySQL, there will be a maintenance window, so this change can be implemented during the downtime (also, there is no mention that the system must always be available).
upvoted 1 times
Dclaiborne41
2 years, 6 months ago
There isn't a "without downtime" constraint in the question.
upvoted 1 times
Ekramy_Elnaggar
Most Recent 1 week, 5 days ago
Selected Answer: C
OLTP database performance depends on IOPS. SSD is the best solution for higher IOPS, and in GCP, the bigger the disk size, the higher the IOPS.
upvoted 1 times
Hungdv
3 months, 2 weeks ago
Choose C
upvoted 1 times
ukivanlamlpi
4 months, 2 weeks ago
Selected Answer: A
Increasing disk size will not increase performance; either increase RAM or go serverless. So A or D; with no cost concern, I would pick D.
upvoted 2 times
ashishdwi007
10 months ago
Selected Answer: C
I was looking for Cloud SQL in the options; since it is not there, C is best.
upvoted 1 times
hzaoui
10 months, 2 weeks ago
Selected Answer: E
The fact that the database is used for importing and normalizing performance statistics suggests frequent data insertions. Optimizing this process through bulk inserts directly addresses a likely performance bottleneck.
upvoted 1 times
JohnDohertyDoe
10 months, 2 weeks ago
Selected Answer: A
The answer according to Google is A. This question is part of Google's sample questions for the certification.
upvoted 1 times
ccpmad
5 months, 2 weeks ago
Yes, it is, and it says the answer is C.
upvoted 1 times
JPA210
1 year, 1 month ago
I see most of the people here replying C, but I do not think that the size of the disk will bring much gain in performance. D, yes, seems to me that it will bring great improvements in performance, management, operations, and cost. So D, migrate their performance metrics warehouse to BigQuery.
upvoted 3 times
Palan
1 year, 3 months ago
I would go with option E because bulk inserts improve performance drastically, unless they have been implemented already.
upvoted 1 times
eka_nostra
1 year, 4 months ago
Selected Answer: C
Increasing disk size will also increase its performance. https://cloud.google.com/compute/docs/disks/performance#optimize_disk_performance
upvoted 4 times
JC0926
1 year, 7 months ago
Selected Answer: C
C. Dynamically resize the SSD persistent disk to 500 GB. By increasing the size of the SSD persistent disk, the database server can achieve better performance. A larger SSD persistent disk provides higher IOPS (input/output operations per second) and throughput, allowing for faster read and write operations. This can help improve the performance of the MySQL database server running on the Google Compute Engine instance.
upvoted 3 times
mifrah
1 year, 8 months ago
On another website I found the question with the hint "you are not allowed to reboot the VM before the next maintenance window". That makes it clearer: C.
upvoted 1 times
JC0926
1 year, 8 months ago
Selected Answer: E
E. Modify all of their batch jobs to use bulk inserts into the database: This can be a very effective solution for improving performance. Bulk inserts can greatly reduce the number of round-trips to the database, which can help to minimize latency and improve overall throughput. Therefore, option E is the best choice for improving performance in this scenario.
upvoted 3 times
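[Editor's note] A minimal sketch of the bulk-insert idea in option E, using Python's built-in sqlite3 module as a stand-in for MySQL (with MySQL you would use a client library's `executemany` or a multi-row `INSERT ... VALUES (...), (...)` statement); the table and rows are made up for illustration:

```python
import sqlite3

# In-memory database standing in for the MySQL server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (host TEXT, metric TEXT, value REAL)")

rows = [
    ("db-1", "qps", 120.0),
    ("db-1", "latency_ms", 3.2),
    ("db-2", "qps", 98.5),
]

# Row-by-row: one statement (and typically one round trip) per row.
for row in rows:
    conn.execute("INSERT INTO stats VALUES (?, ?, ?)", row)

# Bulk insert: a single batched statement covering all rows.
conn.executemany("INSERT INTO stats VALUES (?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM stats").fetchone()[0]
print(count)  # 6: each variant inserted the same 3 rows
```

The performance win comes from amortizing round trips and per-statement overhead across the batch, which is exactly what the comment above argues.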
Jackalski
1 year, 11 months ago
Selected Answer: D
On option C: even if increasing the disk gains performance, it will only take a few months to hit new limits. MySQL is not designed for OLAP/analytics but for OLTP, so I vote for D.
upvoted 3 times
AniketD
2 years ago
Selected Answer: C
Correct answer is C. Increased disk capacity improves I/O and directly impacts performance.
upvoted 2 times
BobLoblawsLawBlog
2 years, 1 month ago
Selected Answer: C
C, because an N1 8-vCPU instance maxes out at 15,000 IOPS (https://cloud.google.com/compute/docs/disks/performance#n1_vms), and SSD persistent disks can reach up to 30 IOPS per GB of disk (https://cloud.google.com/compute/docs/disks/performance#example). 80 GB × 30 IOPS = 2,400 IOPS; 500 GB (answer C) × 30 IOPS = 15,000 IOPS = the N1 8-vCPU max.
upvoted 11 times
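[Editor's note] The arithmetic above can be sketched in a few lines of Python, using the figures quoted in the comment (~30 read IOPS per GB for zonal SSD persistent disk, ~15,000 IOPS cap for an 8-vCPU N1 instance); verify current limits against the linked GCP documentation, as they change over time:

```python
# Figures taken from the comment above; treat them as assumptions.
IOPS_PER_GB = 30            # SSD persistent disk, per GB of capacity
N1_8VCPU_IOPS_CAP = 15_000  # instance-level cap for an 8-vCPU N1 VM

def effective_iops(disk_gb: int) -> int:
    # Achievable IOPS is bounded by both disk size and the instance cap.
    return min(disk_gb * IOPS_PER_GB, N1_8VCPU_IOPS_CAP)

print(effective_iops(80))   # 2400  -- the original 80 GB disk
print(effective_iops(500))  # 15000 -- answer C saturates the instance cap
```

Note that sizing beyond 500 GB would buy nothing here: the `min` with the instance cap means additional capacity past that point no longer raises IOPS on this machine type.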
Community vote distribution: A (35%), C (25%), B (20%), Other
