Exam Professional Cloud Architect topic 1 question 67 discussion

Actual exam question from Google's Professional Cloud Architect
Question #: 67
Topic #: 1

You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take?

  • A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3. Restart the instances to automatically deploy new production releases.
  • B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3. Rebuild the Compute Engine image, and update the instance template to deploy new production releases.
  • C. Perform the following: 1. Create a Google Kubernetes Engine (GKE) cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to 'IfNotPresent' in the staging namespace, and then promote it to the production namespace after testing.
  • D. Perform the following: 1. Create a GKE cluster with n1-standard-4 type machines. 2. Build a Docker image from the master branch with all of the dependencies, and tag it with 'latest'. 3. Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to 'Always'. Restart the pods to automatically deploy new production releases.
Suggested Answer: C
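For reference, option C boils down to a Deployment manifest along these lines. This is only a hedged sketch: the project, image name, tag, and namespace are hypothetical placeholders, and the resource requests simply mirror the 0.1 CPU / 128 MB figures from the question.

```yaml
# Hypothetical sketch of option C: a version-tagged image deployed to a
# staging namespace with imagePullPolicy: IfNotPresent (names are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-web
  namespace: staging            # promote to the production namespace after testing
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-web
  template:
    metadata:
      labels:
        app: python-web
    spec:
      containers:
        - name: python-web
          image: gcr.io/my-project/python-web:1.4.2   # immutable version tag per release
          imagePullPolicy: IfNotPresent               # reuse the cached image for this tag
          resources:
            requests:
              cpu: 100m         # 0.1 CPU cores, as stated in the question
              memory: 128Mi     # 128 MB, as stated in the question
```

Because every release gets a new version tag, IfNotPresent still pulls each new version the first time a node sees that tag, while avoiding redundant pulls when Pods are rescheduled.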

Comments

jcmoranp
Highly Voted 5 years ago
C is correct; you need "IfNotPresent" when the image is uploaded to Container Registry.
upvoted 40 times
medi01
1 year, 7 months ago
IfNotPresent won't pull a new version of an already-cached tag.
upvoted 4 times
...
heretolearnazure
1 year, 3 months ago
Yes, I agree.
upvoted 1 times
...
...
TosO
Highly Voted 5 years ago
C is the best choice. You can create a k8s cluster with just one node and use different namespaces for staging and production. In staging, you will test the changes.
upvoted 23 times
AzureDP900
2 years, 1 month ago
Agreed
upvoted 1 times
...
...
nareshthumma
Most Recent 1 month ago
Answer is C
upvoted 1 times
...
44fa527
3 months ago
Selected Answer: C
It should be option C because, in the real world, GKE is the best solution for such a case. Furthermore, it's reliable, scalable, and flexible, and at the least it is the best option of the four.
upvoted 1 times
...
cai_engineer
3 months ago
Selected Answer: A
Not gonna lie, it's A. Don't use GKE; it won't schedule the deployment because most of the node's resources are already occupied by kube-system.
upvoted 1 times
cai_engineer
3 months ago
Also, you can run the container on a Container-Optimized OS (COS) with containerd image on a VM.
upvoted 1 times
...
...
awsgcparch
4 months ago
Selected Answer: D
imagePullPolicy: Always ensures that the latest version of the image is always pulled, which guarantees that the most recent code is deployed. Restarting pods ensures that the new version is deployed without requiring manual intervention.
upvoted 1 times
...
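As a rough illustration of the workflow awsgcparch describes for option D (image name hypothetical): a mutable "latest" tag combined with imagePullPolicy: Always means every Pod restart pulls whatever was pushed most recently.

```yaml
# Hypothetical container fragment for option D: mutable tag plus Always,
# so restarting the Pods is what rolls out a new release.
containers:
  - name: python-web
    image: gcr.io/my-project/python-web:latest
    imagePullPolicy: Always
```

The trade-off is that a "latest" tag makes it harder to pin down which version is actually running than a numbered tag does.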
krokskan
8 months, 3 weeks ago
Selected Answer: B
B, because Kubernetes would be overkill and A is not reliable.
upvoted 1 times
...
Gall
9 months, 3 weeks ago
Selected Answer: C
A is wrong because after a restart the startup script reruns and fetches the code directly from the repo (even on the production branch). Installing the many dependencies will take a lot of time, and the deployed application version will be fuzzy.
upvoted 1 times
...
moumou
10 months ago
C is correct. B is wrong because an instance template cannot be updated once created.
upvoted 2 times
...
kip21
10 months, 1 week ago
C - Correct
upvoted 1 times
...
AWS_Sam
10 months, 2 weeks ago
The correct answer is C. Because it is the only option that RELIABLY tests the app in staging before it is applied to production. Remember that one of the requirements in the question is to reliably deploy the app.
upvoted 1 times
...
Roro_Brother
11 months, 1 week ago
Selected Answer: A
You don't need GKE for 0.1 CPU; only A meets the needs.
upvoted 3 times
...
MahAli
11 months, 3 weeks ago
Selected Answer: A
For 0.1 CPU I would never use GKE, considering the cost associated with the control plane, and not even one of the options mentions micro instances for the node pool.
upvoted 4 times
...
mastrrrr
1 year ago
Selected Answer: A
When we read the question, it says "0.1 CPU cores and 128 MB of memory" to operate in production, and that you want to monitor and "maximize machine utilization". Answer A should be a fit based on those details. Wouldn't GKE be overkill for such a tiny application?
upvoted 4 times
...
Arun_m_123
1 year, 1 month ago
Selected Answer: C
A Python app on Compute Engine is a disastrous architecture. C is the correct architecture, which tests the app before putting it into prod.
upvoted 1 times
...
AdityaGupta
1 year, 1 month ago
Selected Answer: C
You should use GKE because you can scale up and down based on your demand. You can also specify the resource size, like 0.1 CPU and 128 MB of memory, per Pod. Secondly, a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, promoted to the production namespace after testing, is best practice.
upvoted 7 times
A21325412
1 year ago
Ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Correct. Being able to specify the resources on our Pods is why C is chosen over A (f1-micro). This is what allows us to "maximize machine utilization"!
upvoted 2 times
...
...
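To make the per-Pod sizing AdityaGupta and A21325412 mention concrete, a resources block might look like the following sketch (the limits values are hypothetical; the requests mirror the question). The scheduler uses the requests to bin-pack many such small replicas onto a node, which is what drives up machine utilization.

```yaml
# Hypothetical resources block; the requests are what the scheduler bin-packs on.
resources:
  requests:
    cpu: 100m        # 0.1 CPU cores
    memory: 128Mi    # 128 MB
  limits:
    cpu: 200m        # hypothetical ceiling
    memory: 256Mi    # hypothetical ceiling
```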
ghitesh
1 year, 3 months ago
Selected Answer: A
An f1-micro managed instance group fits all the requirements, with the capability to run up to 2 application instances per VM for the given requirement. A GKE cluster would be overkill, even before the control-plane nodes/cost are considered. An n1-standard-1 instance would require 10 application instances/Pods to be running (assuming zero DaemonSets) and would still leave about 75% of the memory unused.
upvoted 8 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other.