Your team develops services that run on Google Kubernetes Engine. You need to standardize their log data using Google-recommended practices and make the data more useful in the fewest number of steps. What should you do? (Choose two.)
A.
Create aggregated exports on application logs to BigQuery to facilitate log analytics.
B.
Create aggregated exports on application logs to Cloud Storage to facilitate log analytics.
C.
Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs.
D.
Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging.
E.
Mandate the use of the Pub/Sub API to write structured data to Pub/Sub and create a Dataflow streaming pipeline to normalize logs and write them to BigQuery for analytics.
C. Single-line JSON to Cloud Logging: This is the most straightforward and efficient way to standardize logs. By writing logs as single-line JSON, you ensure consistent formatting and make it easy for Cloud Logging to parse and analyze the data. Cloud Logging automatically handles ingestion and storage.
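As a minimal sketch of option C, the snippet below emits a single-line JSON log entry on stdout. On GKE, the logging agent parses each such line into a structured entry in Cloud Logging; the `severity` and `message` keys are recognized special fields, and any other keys end up in `jsonPayload`. The helper name `log_json` and the example field names are illustrative, not part of any API.

```python
import json
import sys

def log_json(message, severity="INFO", **fields):
    """Emit one structured log entry as single-line JSON on stdout.

    On GKE, the logging agent parses each JSON line into a structured
    Cloud Logging entry; "severity" and "message" are recognized
    special fields, and any extra fields land in jsonPayload.
    """
    entry = {"severity": severity, "message": message, **fields}
    # json.dumps produces a single line as long as no value contains
    # raw newlines, which is what the agent expects.
    sys.stdout.write(json.dumps(entry) + "\n")

log_json("user signed in", severity="NOTICE", user_id="42")
```

Because this only writes to stdout, it requires no Google Cloud client library or credentials in the application, which is why it satisfies the "fewest number of steps" requirement.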
D. Logging API for Structured Logs: Using the Logging API directly allows for more control over log formatting and metadata. You can include specific labels, severity levels, and other information to make your logs more informative. This approach also ensures that logs are written directly to Cloud Logging, eliminating the need for additional processing steps.
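A rough sketch of option D, using the `google-cloud-logging` client's `log_struct` method to write a structured entry directly. The payload construction is separated into a plain helper; the actual write requires the client library and application default credentials, and the logger name `"my-service"` is a placeholder.

```python
def build_entry(message, severity="INFO", **labels):
    # Assemble the structured payload to send to Cloud Logging.
    entry = {"message": message}
    if labels:
        entry["labels"] = labels
    return entry, severity

def write_structured_log(message, severity="INFO", **labels):
    # Requires the google-cloud-logging package and application
    # default credentials; "my-service" is a placeholder logger name.
    from google.cloud import logging as cloud_logging

    payload, sev = build_entry(message, severity, **labels)
    client = cloud_logging.Client()
    logger = client.logger("my-service")
    # log_struct writes the dict as a jsonPayload entry with the
    # given severity.
    logger.log_struct(payload, severity=sev)
```

Compared with option C, this gives the application explicit control over severity and labels, but it couples the code to the Cloud Logging client library, which is one reason commenters below argue C better fits the "fewest number of steps" requirement.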
Why other options are less ideal:
A. Aggregated Exports to BigQuery: While BigQuery is excellent for analytics, this approach requires additional steps to configure and manage exports. It's not the most efficient way to standardize logs initially.
B. Aggregated Exports to Cloud Storage: Similar to option A, this adds complexity and requires additional processing to analyze the data in Cloud Storage.
E. Pub/Sub and Dataflow: This is a more complex solution that involves multiple services and requires significant development effort. It's overkill for simply standardizing logs.
C. Writing log output to standard output (stdout) as single-line JSON: This is a recommended practice for containerized applications running on Kubernetes. Kubernetes captures everything written to stdout and stderr and routes it to its logging agent (in this case, Cloud Logging in GKE). By structuring logs as single-line JSON, you enable Cloud Logging to ingest them as structured logs, which are more queryable and readable. This approach is efficient and does not require any changes in the application to use specific logging APIs.
A. Create aggregated exports on application logs to BigQuery: Exporting logs to BigQuery allows for powerful analytics capabilities. BigQuery is well-suited for running fast, SQL-like queries on large datasets. By exporting logs to BigQuery, you can perform more complex analyses and gain deeper insights from your log data.
A & C:
Option A addresses "make the data more useful": BigQuery lets us apply big-data analysis capabilities to the stored logs: https://cloud.google.com/logging/docs/export/aggregated_sinks#supported-destinations
Option C addresses "standardize their log data" by producing structured logs: https://cloud.google.com/kubernetes-engine/docs/concepts/about-logs#best_practices
Option D is also a viable solution but C is preferred, considering the “fewest number of steps” requirement.
Choosing C and D together makes no sense, as both aim to achieve the same goal.
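For the aggregated-sink half of this answer (option A), the sketch below creates a log sink that routes GKE container logs to a BigQuery dataset, assuming the `google-cloud-logging` client's `Client.sink(...)` helper. The sink name, filter, and project/dataset IDs are placeholders, and the call needs credentials with permission to create sinks.

```python
def bigquery_destination(project_id, dataset_id):
    # Destination URI format for a BigQuery dataset sink.
    return (
        "bigquery.googleapis.com/projects/"
        f"{project_id}/datasets/{dataset_id}"
    )

def create_log_sink(project_id, dataset_id, sink_name="gke-app-logs",
                    log_filter='resource.type="k8s_container"'):
    # Requires google-cloud-logging and credentials allowed to create
    # sinks; all names here are illustrative placeholders.
    from google.cloud import logging as cloud_logging

    client = cloud_logging.Client(project=project_id)
    sink = client.sink(
        sink_name,
        filter_=log_filter,
        destination=bigquery_destination(project_id, dataset_id),
    )
    sink.create()
    # The sink's writer identity must then be granted write access
    # to the destination dataset before entries start flowing.
    return sink
```

Once structured logs (option C) flow through such a sink, they arrive in BigQuery with their `jsonPayload` fields intact and can be queried with standard SQL.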
Write log output to standard output (stdout) as single-line JSON:
This practice allows you to use structured logs, specifically in JSON format, making it easier to parse and analyze log data.
Cloud Logging can ingest logs from standard output, and structured logs enhance the usability of log data.
Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging:
Using the Logging API allows your applications to send structured log data directly to Cloud Logging.
Structured logs provide more context and are easier to filter, search, and analyze within Cloud Logging.
Option A: Create aggregated exports on application logs to BigQuery. This will facilitate log analytics by exporting application logs to BigQuery, which is a fully-managed, serverless data warehouse. BigQuery allows you to perform advanced analytics on your log data, including running complex queries and visualizing the results.
Option C: Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs. This approach involves writing log output to standard output in a specific format (single-line JSON) that can be easily ingested by Cloud Logging. By using structured logs, you can take advantage of advanced querying and filtering capabilities provided by Cloud Logging.
C & D. Only C and D involve Cloud Logging directly; the other options add extra steps and won't come for free.
"When you create a new GKE cluster, Cloud Operations for GKE integration with Cloud Logging and Cloud Monitoring is enabled by default."
https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs#:~:text=When%20you%20create%20a%20new%20GKE%20cluster%2C%20Cloud%20Operations%20for%20GKE%20integration%20with%20Cloud%20Logging%20and%20Cloud%20Monitoring%20is%20enabled%20by%20default.
"Fewest number of steps" — I believe this phrase is the key. Option D would take more steps.
also: https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs#best_practices
Answer is C & D.
To standardize log data and make it more useful in the most efficient way, it is recommended to write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs. This method allows for easy and efficient ingestion of structured log data into Cloud Logging, which can then be easily queried and analyzed. Additionally, mandating the use of the Logging API in the application code allows for the writing of structured logs directly from the application code, improving the usability and reliability of the logs.
A and B, which involve creating aggregated exports of log data to either BigQuery or Cloud Storage, are not necessary for standardizing log data. These options may be useful for storing and analyzing log data, but they are not necessary for standardizing the format of the log data. To standardize log data, it is sufficient to write log output to standard output (stdout) as single-line JSON, which can be ingested into Cloud Logging as structured logs.
E, which involves using the Pub/Sub API and creating a Dataflow streaming pipeline to normalize logs and write them to BigQuery for analytics, is a more complex solution that requires more steps and is not necessary for standardizing log data. While this option may be useful for storing and analyzing log data, it is not necessary for standardizing the format of the log data. To standardize log data, it is sufficient to write log output to stdout and use the Logging API to write structured logs to Cloud Logging.