Exam Professional Machine Learning Engineer topic 1 question 8 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 8
Topic #: 1

You work for a public transportation company and need to build a model to estimate delay times for multiple transportation routes. Predictions are served directly to users in an app in real time. Because different seasons and population increases impact the data relevance, you will retrain the model every month. You want to follow Google-recommended best practices. How should you configure the end-to-end architecture of the predictive model?

  • A. Configure Kubeflow Pipelines to schedule your multi-step workflow from training to deploying your model.
  • B. Use a model trained and deployed on BigQuery ML, and trigger retraining with the scheduled query feature in BigQuery.
  • C. Write a Cloud Functions script that launches a training and deploying job on AI Platform that is triggered by Cloud Scheduler.
  • D. Use Cloud Composer to programmatically schedule a Dataflow job that executes the workflow from training to deploying your model.
Suggested Answer: A

Comments

Paul_Dirac
Highly Voted 3 years, 5 months ago
Answer: A
A. Kubeflow Pipelines can form an end-to-end architecture (https://www.kubeflow.org/docs/components/pipelines/overview/pipelines-overview/) and deploy models.
B. BigQuery ML can't offer an end-to-end architecture because it must use another tool, like AI Platform, for serving models at the end of the process (https://cloud.google.com/bigquery-ml/docs/export-model-tutorial#online_deployment_and_serving).
C. Cloud Scheduler can trigger the first step in a pipeline, but then some orchestrator is needed to carry out the remaining steps. Besides, Cloud Scheduler alone can't ensure failure handling during pipeline execution.
D. A Dataflow job can't deploy models; it would have to hand off to AI Platform at the end instead.
upvoted 40 times
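To make the comparison above concrete, here is a minimal sketch of what option A could look like with the Kubeflow Pipelines SDK (assuming kfp v2). The component bodies, pipeline name, and GCS path are illustrative placeholders, not part of the exam question; real code would run your training framework and call the AI Platform / Vertex AI deployment API.

```python
# A minimal sketch (assumed kfp v2 SDK); component bodies, names, and the
# GCS path are illustrative placeholders, not an official solution.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.10")
def train_model(training_data_uri: str, model: dsl.Output[dsl.Model]):
    """Placeholder training step: fit a delay-estimation model and save it."""
    # Real code would load the data, fit the model, and write it to model.path.
    with open(model.path, "w") as f:
        f.write(f"model trained on {training_data_uri}")


@dsl.component(base_image="python:3.10")
def deploy_model(model: dsl.Input[dsl.Model]) -> str:
    """Placeholder deployment step: push the trained model to online serving."""
    # Real code would call the AI Platform / Vertex AI endpoint-deployment API.
    return f"deployed model from {model.path}"


@dsl.pipeline(name="delay-estimation-monthly-retrain")
def retrain_pipeline(training_data_uri: str = "gs://example-bucket/delays/latest"):
    train_task = train_model(training_data_uri=training_data_uri)
    deploy_model(model=train_task.outputs["model"])


if __name__ == "__main__":
    # Compile to a spec that a recurring run (or a Vertex AI Pipelines
    # schedule) can execute every month.
    compiler.Compiler().compile(retrain_pipeline, "retrain_pipeline.json")
```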
q4exam
3 years, 2 months ago
Dataflow can deploy a model .... this is how you do streaming inference on a stream
upvoted 1 times
lordcenzin
2 years, 9 months ago
Yes, you can, but it is not supposed to do that. Dataflow is for data processing and transformation; you would lose all the capabilities Kubeflow provides natively. Between the two answers, I think A is the most correct.
upvoted 2 times
...
mousseUwU
3 years, 1 month ago
Please send a source link?
upvoted 1 times
...
...
mousseUwU
3 years, 1 month ago
I guess it's A
upvoted 3 times
...
...
gcp2021go
Highly Voted 3 years, 5 months ago
The answer is D. I found a similar explanation in this course (open for discussion). I found B could also work, but the question asked for end-to-end, so I chose D instead of B. https://www.coursera.org/lecture/ml-pipelines-google-cloud/what-is-cloud-composer-CuXTQ
upvoted 11 times
tavva_prudhvi
1 year, 8 months ago
D is incorrect. Cloud Composer is a fully managed workflow orchestration service built on Apache Airflow. It is a Google-recommended way to schedule continuous training jobs, but it isn't used to run the training jobs themselves; AI Platform is used for training and deployment.
upvoted 1 times
...
...
harithacML
Most Recent 2 months ago
Selected Answer: A
Requirements: retrain the model every month + Google-recommended best practices + end-to-end architecture.
A. Configure Kubeflow Pipelines to schedule your multi-step workflow from training to deploying your model: supports all of the above.
B. Use a model trained and deployed on BigQuery ML, and trigger retraining with the scheduled query feature in BigQuery: why BigQuery ML when Vertex AI/Kubeflow can handle it end to end? BigQuery ML plus a trigger only initiates the code run.
C. Write a Cloud Functions script that launches a training and deploying job on AI Platform that is triggered by Cloud Scheduler: not recommended by Google for end-to-end ML.
D. Use Cloud Composer to programmatically schedule a Dataflow job that executes the workflow from training to deploying your model: not recommended by Google for end-to-end ML. What if the model fails? Where is metrics monitoring?
upvoted 1 times
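Since the monthly-retraining requirement keeps coming up, here is a hedged sketch of how the compiled pipeline from the earlier sketch could be put on a monthly schedule through the KFP client. The host URL and experiment name are placeholders, and exact method/attribute names vary across KFP SDK versions.

```python
# Hedged sketch: schedule the compiled pipeline to rerun monthly via the KFP
# client. Host, experiment name, and the 6-field KFP cron expression are
# assumptions; attribute names differ slightly across kfp SDK versions.
import kfp

client = kfp.Client(host="https://example-kfp-endpoint")
experiment = client.create_experiment(name="delay-estimation")

client.create_recurring_run(
    experiment_id=experiment.experiment_id,  # .id on older (v1) SDKs
    job_name="monthly-retrain",
    pipeline_package_path="retrain_pipeline.json",
    # KFP cron has 6 fields (sec min hour day-of-month month day-of-week):
    # run at 00:00 on the 1st of every month.
    cron_expression="0 0 0 1 * *",
)
```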
...
PhilipKoku
5 months, 3 weeks ago
Selected Answer: A
A) Kubeflow Pipelines is the answer.
upvoted 1 times
...
fragkris
11 months, 4 weeks ago
Selected Answer: A
Chose A
upvoted 1 times
...
Sum_Sum
1 year ago
Selected Answer: A
D: a Dataflow job can't deploy models. B and C are not complete solutions, leaving A as the correct one.
upvoted 1 times
...
suranga4
1 year, 2 months ago
Answer is A
upvoted 1 times
...
M25
1 year, 6 months ago
Selected Answer: A
Went with A
upvoted 1 times
...
John_Pongthorn
1 year, 9 months ago
Selected Answer: A
A. Though the newer option is Vertex AI Pipelines, built on Kubeflow.
upvoted 1 times
...
Fatiy
1 year, 9 months ago
Selected Answer: A
A: Kubeflow Pipelines is a good fit here, since you need to retrain your model every month, which it can automate. This makes it easier to manage the entire process, from training to deployment, in a streamlined and scalable way.
upvoted 1 times
...
EFIGO
2 years ago
Selected Answer: A
A is correct. All the options get you to the required result, but only A follows Google-recommended best practices.
upvoted 1 times
...
abhi0706
2 years ago
Answer is A: Kubeflow Pipelines can form an end-to-end architecture
upvoted 1 times
...
GCP72
2 years, 3 months ago
Selected Answer: A
Correct answer is "A"
upvoted 1 times
...
caohieu04
2 years, 9 months ago
Selected Answer: A
Community vote
upvoted 2 times
...
lordcenzin
2 years, 9 months ago
Selected Answer: A
A for me too. Kubeflow provides all the end-to-end tooling to do what is asked.
upvoted 2 times
...
gcper
3 years, 2 months ago
A. Kubeflow can handle all of those things, including deploying to a model endpoint for real-time serving.
upvoted 2 times
...
celia20200410
3 years, 4 months ago
ANS: A https://medium.com/google-cloud/how-to-build-an-end-to-end-propensity-to-purchase-solution-using-bigquery-ml-and-kubeflow-pipelines-cd4161f734d9#75c7 To automate this model-building process, you will orchestrate the pipeline using Kubeflow Pipelines, ‘a platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers.’
upvoted 6 times
q4exam
3 years, 2 months ago
I think both A and D are correct because they are just different ways of doing ML ...
upvoted 1 times
ms_lemon
3 years, 1 month ago
But D doesn't follow Google best practices
upvoted 2 times
george_ognyanov
3 years, 1 month ago
The answer really seems to be A. Here is a link to the Google-recommended best practices; they talk about Vertex AI Pipelines, which is essentially Kubeflow. https://cloud.google.com/architecture/ml-on-gcp-best-practices?hl=en#machine-learning-workflow-orchestration
upvoted 3 times
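For completeness, a hedged sketch of the Vertex AI Pipelines route mentioned above, assuming a recent google-cloud-aiplatform SDK, the compiled retrain_pipeline.json from the earlier sketch, and placeholder project/region/bucket values:

```python
# Hedged sketch of the Vertex AI Pipelines (managed Kubeflow) route; project,
# region, bucket, and display names are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="delay-estimation-monthly-retrain",
    template_path="retrain_pipeline.json",
    pipeline_root="gs://example-bucket/pipeline-root",
)

# Run the train-then-deploy pipeline at 00:00 on the 1st of every month.
job.create_schedule(
    display_name="monthly-retrain-schedule",
    cron="0 0 1 * *",
)
```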
...
...
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other.