Exam Professional Cloud Developer topic 1 question 146 discussion

Actual exam question from Google's Professional Cloud Developer
Question #: 146
Topic #: 1

Your company's product team has a new requirement, based on customer demand, to autoscale your stateless and distributed service running in a Google Kubernetes Engine (GKE) cluster. You want to find a solution that minimizes changes because this feature will go live in two weeks. What should you do?

  • A. Deploy a Vertical Pod Autoscaler, and scale based on the CPU load.
  • B. Deploy a Vertical Pod Autoscaler, and scale based on a custom metric.
  • C. Deploy a Horizontal Pod Autoscaler, and scale based on the CPU load.
  • D. Deploy a Horizontal Pod Autoscaler, and scale based on a custom metric.
Suggested Answer: C 🗳️

Comments

thewalker
2 months, 2 weeks ago
Selected Answer: C
Minimal changes: Horizontal Pod Autoscaler (HPA) is a built-in Kubernetes feature that requires minimal configuration. You can quickly enable it and configure it to scale based on CPU utilization, which is a standard metric readily available in Kubernetes.
Stateless and distributed service: HPA is well suited for stateless and distributed services. It scales by adding or removing replicas of your service, ensuring that your application remains distributed and handles load efficiently.
Two-week deadline: HPA is a straightforward solution that can be deployed and configured within a two-week timeframe.
upvoted 1 times
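For illustration, here is a minimal sketch of what option C could look like in practice: an autoscaling/v2 HorizontalPodAutoscaler that targets CPU utilization. The Deployment name my-service, the replica bounds, and the 70% target are placeholder assumptions, not values given in the question.

```yaml
# Minimal HPA sketch for option C: scale a stateless Deployment on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service          # assumed name of the existing stateless Deployment
  minReplicas: 3              # placeholder lower bound
  maxReplicas: 10             # placeholder upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Applying a manifest like this with kubectl apply requires no change to the application code or container image, which is why it fits the two-week constraint.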
thewalker
2 months, 2 weeks ago
A. Deploy a Vertical Pod Autoscaler, and scale based on the CPU load: Vertical Pod Autoscaler (VPA) scales the resources (CPU and memory) of individual pods, not the number of pods. This might not be the most efficient approach for a stateless and distributed service.
B. Deploy a Vertical Pod Autoscaler, and scale based on a custom metric: VPA with custom metrics requires more effort to set up and configure. It's not the most efficient solution for a quick deployment.
D. Deploy a Horizontal Pod Autoscaler, and scale based on a custom metric: While HPA with custom metrics can be powerful, it requires more time to set up and configure. For a two-week deadline, using CPU load as the metric is a simpler and faster approach.
upvoted 1 times
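For contrast, a hedged sketch of what option D would involve. The per-Pod metric requests_per_second is hypothetical; serving it to the HPA additionally requires a custom metrics adapter in the cluster (on GKE, typically the Cloud Monitoring custom metrics adapter) and usually application changes to export the metric, which is the extra setup referred to above.

```yaml
# Sketch of a custom-metric HPA (option D). Unlike the CPU-based HPA, this needs
# a metrics adapter plus an application that actually exports the metric.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service               # placeholder Deployment name
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: requests_per_second  # hypothetical custom metric reported per Pod
      target:
        type: AverageValue
        averageValue: "100"        # target of 100 requests/second per Pod
```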
__rajan__
1 year, 2 months ago
Selected Answer: C
C is correct.
upvoted 1 times
purushi
1 year, 3 months ago
Selected Answer: C
Since we want the minimum number of changes, I go with C. Scaling based on custom metrics might take more time compared to the built-in CPU load metric. Also, note that the application is stateless, so a simple CPU metric is enough as a scaling parameter.
upvoted 1 times
abhishek_verma1_stl_tech
1 year, 3 months ago
Have you taken the exam yet? Are these questions similar to the actual exam questions?
upvoted 1 times
telp
1 year, 10 months ago
Selected Answer: C
A. Incorrect: this doesn't help with a distributed application.
B. Incorrect: this would work, but it would require Cloud Monitoring integration and possibly application modification. It would also not apply to a distributed application.
C. Correct: this requires the fewest changes to the code and fits the requirements.
D. Incorrect: this would work, but it would require Cloud Monitoring integration and possibly application modification.
upvoted 1 times
TNT87
1 year, 11 months ago
Answer: C. Scale based on the percent utilization of CPUs across nodes. This can be cost effective, letting you maximize CPU resource utilization. Because CPU usage is a trailing metric, however, your users might experience latency while a scale-up is in progress.
upvoted 1 times
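As a side note on the trailing-metric latency mentioned above, autoscaling/v2 also exposes an optional behavior stanza (placed under spec of the HPA manifest) that can make scale-up react faster while keeping scale-down conservative. The values below are illustrative assumptions only, not recommendations from the question.

```yaml
# Optional HPA behavior tuning (goes under spec: of an autoscaling/v2 HPA).
behavior:
  scaleUp:
    stabilizationWindowSeconds: 0    # act on scale-up signals immediately
    policies:
    - type: Percent
      value: 100                     # allow doubling the replica count
      periodSeconds: 15              # per 15-second window
  scaleDown:
    stabilizationWindowSeconds: 300  # wait 5 minutes before removing replicas
```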
zellck
1 year, 11 months ago
Selected Answer: C
C is the answer. https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster.
upvoted 2 times
melisargh
1 year, 11 months ago
Selected Answer: C
A and B are wrong because it is recommended to start with HPA if you have nothing in place yet. D would take time and effort since you have to tune the custom metric. C is right because it is the simplest entry-level solution for autoscaling, given the unknown new requirements.
upvoted 3 times
TrainingProgram
1 year, 11 months ago
Selected Answer: D
I think D is the right option.
upvoted 1 times
gardislan18
1 year, 11 months ago
Selected Answer: C
There are too many typos in this question, but if "toad" really is a typo for "load", then the answer is C.
upvoted 3 times
ash_meharun
1 year, 11 months ago
Please share your views on why it's not D. The question doesn't say anything about increasing load utilization, only about a new (additional) requirement.
upvoted 1 times
zellck
1 year, 11 months ago
Scaling based on CPU load will be sufficient; you don't need to create a custom metric.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other