A development team at your company has created a dockerized HTTPS web application. You need to deploy the application on Google Kubernetes Engine (GKE) and make sure that the application scales automatically. How should you deploy to GKE?
A.
Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
B.
Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
C.
Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the HTTPS traffic.
D.
Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
"Ingress is a Kubernetes resource that encapsulates a collection of rules and configuration for routing external HTTP(S) traffic to internal services.
On GKE, Ingress is implemented using Cloud Load Balancing. When you create an Ingress in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application."
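To make that concrete, here is a minimal sketch of the Ingress from answer A. All names (web-app-service, web-tls) are illustrative assumptions, not part of the question; the TLS certificate is assumed to be stored in a Kubernetes Secret.

```yaml
# Hypothetical example: an Ingress that terminates HTTPS via the
# GKE Ingress controller. On GKE this provisions an external
# HTTP(S) Application Load Balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
  - secretName: web-tls        # TLS cert/key Secret for SSL termination
  defaultBackend:
    service:
      name: web-app-service    # Service fronting the application pods
      port:
        number: 443
```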
Are you exposing multiple services through a single IP address, and do you therefore need to route your traffic? If so, an Ingress is the way to do it.

Correct answer is A.
A and B both provision, under the hood, a Google Cloud load balancer with an external IP address. However, when it comes to HTTP(S) traffic, an Ingress is the way to go because of SSL termination and its routing options.
C & D are clearly incorrect.
B is incorrect because of this phrase:
"Service resource of type LoadBalancer to load-balance the HTTPS traffic."
A GKE Service of type LoadBalancer is an L4 network (or internal) load balancer; it does not terminate HTTPS traffic.
Thus only A is correct.
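For completeness, a sketch of the autoscaling half of answer A, assuming a Deployment named web-app (an illustrative name): a HorizontalPodAutoscaler scales the pods on CPU utilization, while cluster autoscaling is enabled separately on the node pool (for example with `gcloud container clusters update CLUSTER --enable-autoscaling`).

```yaml
# Hypothetical HorizontalPodAutoscaler for a Deployment named web-app.
# Node-level cluster autoscaling is configured on the node pool,
# not in this manifest.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```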
B
A. Ingress resource: while an Ingress can be used for external load balancing, it often requires additional configuration for HTTPS termination (offloading SSL from your application containers). LoadBalancer Services typically offer a simpler setup for basic external load balancing when HTTPS termination is not a concern.
C & D. Compute Engine instance group autoscaling: GKE manages its own nodes separately from user-managed Compute Engine instance groups. Autoscaling a Compute Engine instance group would not effectively manage the Kubernetes pods or nodes in this scenario.
Service of type LoadBalancer: https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer
"This page provides a general overview of how Google Kubernetes Engine (GKE) creates and manages Google Cloud load balancers when you apply a Kubernetes LoadBalancer Services manifest. It describes the different types of load balancers and how settings like the externalTrafficPolicy and GKE subsetting for L4 internal load balancers determine how the load balancers are configured." -> L4 TCP/UDP, not HTTPS.
Ingress: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
"This page provides a general overview of what Ingress for external Application Load Balancers is and how it works. Google Kubernetes Engine (GKE) provides a built-in and managed Ingress controller called GKE Ingress. This controller implements Ingress resources as Google Cloud load balancers for HTTP(S) workloads in GKE." -> HTTP(S).
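To make the contrast concrete, here is a sketch of what option B would deploy (all names illustrative): a Service of type LoadBalancer, which on GKE provisions an L4 passthrough load balancer. The TCP port is forwarded as-is, so TLS is not terminated at the load balancer.

```yaml
# Hypothetical Service of type LoadBalancer (option B). GKE provisions
# an L4 network load balancer; TLS is NOT terminated here, so the
# application pods must handle HTTPS themselves.
apiVersion: v1
kind: Service
metadata:
  name: web-app-lb
spec:
  type: LoadBalancer
  selector:
    app: web-app               # illustrative pod label
  ports:
  - port: 443                  # forwarded at L4, no SSL offload
    targetPort: 8443           # illustrative container port
```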
I'm assuming B is the suggested answer because the question doesn't state that the application should be available externally. Services allow exposing resources internally and to load balancers.
However, it should be A, as the assumption would be an external web application.
https://cloud.google.com/kubernetes-engine/docs/concepts/service
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
"This page provides a general overview of what Ingress for external Application Load Balancers is and how it works. Google Kubernetes Engine (GKE) provides a built-in and managed Ingress controller called GKE Ingress. This controller implements Ingress resources as Google Cloud load balancers for HTTP(S) workloads in GKE."
As there is no mention of the type of traffic, internal or external, I'm going with A - Ingress.
Options C and D are straightforwardly wrong.
Between A and B: B is the correct answer, because it load-balances the ingress traffic in the Kubernetes-native style. That is also why cluster scaling works.
This is how it should look:
External load-balancing Ingress --> K8s Service of type LoadBalancer --> pods that can autoscale
Pointing an external load balancer directly at autoscaled Pods defeats the point of using GKE.
Ingress is HTTP(S), while a Service of type LoadBalancer is TCP/UDP (L4).
https://cloud.google.com/load-balancing/docs/choosing-load-balancer
https://cloud.google.com/kubernetes-engine/docs/concepts/service-networking