
Exam Professional Cloud DevOps Engineer topic 1 question 4 discussion

Actual exam question from Google's Professional Cloud DevOps Engineer
Question #: 4
Topic #: 1

You have a set of applications running on a Google Kubernetes Engine (GKE) cluster, and you are using Stackdriver Kubernetes Engine Monitoring. You are bringing a new containerized application required by your company into production. This application is written by a third party and cannot be modified or reconfigured. The application writes its log information to /var/log/app_messages.log, and you want to send these log entries to Stackdriver Logging. What should you do?

  • A. Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.
  • B. Deploy a Fluentd daemonset to GKE. Then create a customized input and output configuration to tail the log file in the application's pods and write to Stackdriver Logging.
  • C. Install Kubernetes on Google Compute Engine (GCE) and redeploy your applications. Then customize the built-in Stackdriver Logging configuration to tail the log file in the application's pods and write to Stackdriver Logging.
  • D. Write a script to tail the log file within the pod and write entries to standard output. Run the script as a sidecar container with the application's pod. Configure a shared volume between the containers to allow the script to have read access to /var/log in the application container.
Suggested Answer: B

Comments

francisco_guerra
Highly Voted 2 months ago
Answer is B. Besides the list of logs that the Logging agent streams by default, you can customize the Logging agent to send additional logs to Logging or to adjust agent settings by adding input configurations. The configuration definitions apply to the fluent-plugin-google-cloud output plugin only and specify how logs are transformed and ingested into Cloud Logging.
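For illustration only, here is a minimal sketch of what such a customized input/output configuration could look like, packaged as a Kubernetes ConfigMap. The ConfigMap name, file name, and log tag are placeholders, not values from the question or from Google's documentation:

```yaml
# Hypothetical ConfigMap carrying an extra Fluentd input/output configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-custom-config    # placeholder name
  namespace: kube-system
data:
  app-messages.conf: |
    # Input: tail the third-party application's log file.
    <source>
      @type tail
      path /var/log/app_messages.log
      pos_file /var/log/app_messages.log.pos
      tag app-messages
      <parse>
        @type none
      </parse>
    </source>
    # Output: hand the tailed entries to the fluent-plugin-google-cloud
    # output plugin, which writes them to Cloud Logging (Stackdriver Logging).
    <match app-messages>
      @type google_cloud
    </match>
```

The `<source>` block is the input configuration the comment above refers to; the `<match>` block routes the tagged entries to the google_cloud output.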
upvoted 6 times
...
muhasinem
Highly Voted 3 years, 4 months ago
B is correct.
upvoted 6 times
...
mohan999
Most Recent 1 week, 6 days ago
I think it should be D, because the question says you already have a set of applications running on a Google Kubernetes Engine (GKE) cluster and you are using Stackdriver Kubernetes Engine Monitoring. Using a custom agent would mean replacing the default workload logging, and just for one new application it does not make sense to disable the default Stackdriver setting on GKE and use a custom agent for all application logging. So D would be the correct option in this context, since we are only customizing logging for this one application.
upvoted 1 times
...
WakandaF
1 month, 3 weeks ago
I believe it is B. This tutorial describes how to customize Fluentd logging for a Google Kubernetes Engine cluster: you host your own configurable Fluentd daemonset to send logs to Cloud Logging, instead of selecting the Cloud Logging option when creating the Google Kubernetes Engine (GKE) cluster, which does not allow configuration of the Fluentd daemon.
upvoted 4 times
...
DoodleDo
1 month, 3 weeks ago
Answer B, as the application is not writing to STDOUT or STDERR. Source for the answer: https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs. Extract from this source relevant to the answer: GKE's default logging agent provides a managed solution to deploy and manage the agents that send the logs for your clusters to Cloud Logging. Depending on your GKE cluster master version, either fluentd or fluentbit is used to collect logs. Starting from GKE 1.17, logs are collected using a fluentbit-based agent; GKE clusters using versions prior to GKE 1.17 use a fluentd-based agent. If you want to alter the default behavior of the fluentd agents, then you can run a customized fluentd agent or a customized fluentbit agent. Common use cases include:
- removing sensitive data from your logs
- collecting additional logs not written to STDOUT or STDERR
- using specific performance-related settings
- customized log formatting
upvoted 1 times
...
zellck
1 month, 3 weeks ago
Selected Answer: B
B is the answer. https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs#custom_agents
GKE's default logging agent provides a managed solution to deploy and manage the agents that send the logs for your clusters to Cloud Logging. Depending on your GKE cluster master version, either fluentd or fluentbit is used to collect logs. Common use cases include:
- collecting additional logs not written to STDOUT or STDERR
- customized log formatting
upvoted 4 times
...
JonathanSJ
1 month, 3 weeks ago
Selected Answer: B
B. Deploy a Fluentd daemonset to GKE, then create a customized input and output configuration to tail the log file in the application's pods and write to Stackdriver Logging. Stackdriver Kubernetes Engine Monitoring provides log collection and analysis for Kubernetes clusters running on GKE out of the box, but the default configuration doesn't include the ability to tail a specific log file. To collect log entries written to /var/log/app_messages.log, you can deploy a Fluentd daemonset to your GKE cluster. Fluentd is a log collector and forwarder that can be configured to tail a specific log file, in this case /var/log/app_messages.log, and send the log entries to Stackdriver Logging. By deploying a Fluentd daemonset you can create a customized input and output configuration, use that configuration to tail the log file in the application's pods, and write to Stackdriver Logging. This lets you collect the logs from that specific application, ensuring that they reach Stackdriver and can be analyzed later on.
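As a rough sketch of the daemonset half of this option (the names, image, and mount paths below are assumptions, not Google's published manifest), the customized configuration could be mounted into a Fluentd agent along these lines. A real deployment would additionally need a main fluent.conf that includes this directory, credentials or Workload Identity to write to Cloud Logging, and the application's log directory surfaced somewhere the node-level agent can read it:

```yaml
# Illustrative DaemonSet that runs a configurable Fluentd agent on every node
# and mounts the custom configuration from the ConfigMap sketched earlier.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-custom          # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd-custom
  template:
    metadata:
      labels:
        app: fluentd-custom
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16-1   # illustrative image, not Google's build
        volumeMounts:
        - name: varlog                  # node log directory the agent tails
          mountPath: /var/log
        - name: custom-config           # the customized input/output config
          mountPath: /fluentd/etc/conf.d
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: custom-config
        configMap:
          name: fluentd-custom-config
```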
upvoted 1 times
JonathanSJ
1 year, 10 months ago
Option D is NOT the ideal choice, as it would require creating and maintaining a custom script deployed with the application's pod in order to tail the log file and write the entries to standard output. You would need to take care of maintaining this script on every new deployment, update, or scaling of the pod, and the script would need to handle all the edge cases, errors, and permission issues on its own. Furthermore, this option doesn't use the Stackdriver Logging service directly: the script would be writing to standard output, which may not be as reliable and secure as writing to a centralized logging service such as Stackdriver Logging. It would also require additional setup and maintenance to route the standard output to a location where the logs can be analyzed, which is extra development and maintenance effort that is redundant compared to the other options.
upvoted 3 times
...
...
Watcharin_start
1 month, 3 weeks ago
Answer is B. The goal of this solution is to send log messages to Stackdriver Logging. C would only be possible if we had a third-party log collector such as Fluentd monitoring STDOUT logs from the cluster, which this question does not give us. Therefore the answer must be B: content that a container writes to files can be scraped on the hosting worker node, where it is stored under /var/containers/*. If we create a customized tail path in Fluentd to monitor the paths where the log content is stored, we can use a star (*) wildcard to tell the plugin to find all folders that contain a file named app_messages.log.
upvoted 1 times
...
galkin
2 months ago
Selected Answer: D
Sidecar is expected here
upvoted 2 times
...
dija123
6 months, 3 weeks ago
Selected Answer: B
Deploy a Fluentd daemonset to GKE
upvoted 1 times
...
Jason_Cloud_at
1 year, 1 month ago
Selected Answer: B
To collect log entries from a specific file on each node in your GKE cluster, you can use a DaemonSet; Fluentd is often used as the logging agent for log forwarding in GKE.
upvoted 2 times
...
singularis
1 year, 10 months ago
Selected Answer: D
The answer is D. A node-level Fluentd daemonset cannot read log files that exist only inside the application's container. https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent By having your sidecar containers write to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or journald. Each sidecar container prints a log to its own stdout or stderr stream.
upvoted 3 times
...
chelbsik
1 year, 11 months ago
Selected Answer: B
Submitted B
upvoted 1 times
...
hanweiCN
1 year, 11 months ago
I will go with B. A Fluentd daemonset cannot collect logs from inside other pods; it collects logs from the host filesystem (stdout, stderr). https://cloud.google.com/architecture/best-practices-for-operating-containers#use_the_native_logging_mechanisms_of_containers
upvoted 1 times
...
notjoost
1 year, 11 months ago
Selected Answer: D
I think this should be D instead of B, and here's why: installing a Fluentd daemonset will not solve your problem, as that logfile is not accessible to the Fluentd pods. You'd need a sidecar to make the logfile accessible to another container, which is then responsible for forwarding the logs to stdout. A daemonset will not do that, but a sidecar tailing the log will.
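For comparison, here is a minimal sketch of the sidecar pattern from option D, modeled on the Kubernetes sidecar logging example linked earlier in the thread; the image names and volume name are placeholders:

```yaml
# Rough sketch of option D: the application writes into a shared emptyDir volume
# mounted at /var/log, and a sidecar streams the file to its own stdout so the
# default GKE logging agent picks it up.
apiVersion: v1
kind: Pod
metadata:
  name: third-party-app          # placeholder name
spec:
  containers:
  - name: app
    image: example.com/third-party-app:1.0   # the unmodifiable application (placeholder image)
    volumeMounts:
    - name: app-logs
      mountPath: /var/log        # the app keeps writing /var/log/app_messages.log here
  - name: log-tailer
    image: busybox:1.36
    # Print the whole file and keep following it; the entries land on stdout,
    # where the node's default logging agent already collects them.
    command: ["/bin/sh", "-c", "tail -n +1 -F /var/log/app_messages.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}
```

Whether this or a customized Fluentd daemonset is preferable is exactly what the thread is debating: the sidecar leaves the default Stackdriver agents untouched, at the cost of an extra container per pod.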
upvoted 1 times
...
AzureDP900
2 years, 1 month ago
B is correct https://cloud.google.com/architecture/customizing-stackdriver-logs-fluentd
upvoted 2 times
...
dobby_elf
2 years, 3 months ago
Selected Answer: D
D - Installing Fluentd is overkill
upvoted 2 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other
