Exam Professional Machine Learning Engineer topic 1 question 244 discussion

Actual exam question from Google's Professional Machine Learning Engineer

You work for a textile manufacturing company. Your company has hundreds of machines, and each machine has many sensors. Your team used the sensor data to build hundreds of ML models that detect machine anomalies. The models are retrained daily, and you need to deploy them in a cost-effective way. The models must operate 24/7 without downtime and make sub-millisecond predictions. What should you do?

  • A. Deploy a Dataflow batch pipeline and a Vertex AI Prediction endpoint.
  • B. Deploy a Dataflow batch pipeline with the RunInference API, and use model refresh.
  • C. Deploy a Dataflow streaming pipeline and a Vertex AI Prediction endpoint with autoscaling.
  • D. Deploy a Dataflow streaming pipeline with the RunInference API, and use automatic model refresh.
Suggested Answer: D

Comments

fitri001
Highly Voted 7 months, 1 week ago
Selected Answer: D
Why D?
  • Real-time predictions: Dataflow streaming pipelines continuously process sensor data, enabling real-time anomaly detection with sub-millisecond predictions. This is crucial for immediate response to potential machine issues.
  • RunInference API: this API allows invoking models (e.g. TensorFlow) directly within the Dataflow pipeline for on-the-fly inference. This eliminates the need for separate prediction endpoints and reduces latency.
  • Automatic model refresh: since models are retrained daily, automatic refresh ensures the pipeline uses the latest version without downtime. This is essential for maintaining model accuracy and anomaly-detection effectiveness.

Why not C? While autoscaling can handle varying workloads, Vertex AI Prediction endpoints might incur higher costs for real-time, high-volume predictions compared to invoking models directly within the pipeline using RunInference.
upvoted 7 times
gscharly
Most Recent 7 months, 1 week ago
Selected Answer: D
agree with fitri001
upvoted 1 times
pinimichele01
7 months, 3 weeks ago
Selected Answer: D
With the automatic model refresh feature, when the underlying model changes, your pipeline updates to use the new model. Because the RunInference transform automatically updates the model handler, you don't need to redeploy the pipeline. With this feature, you can update your model in real time, even while the Apache Beam pipeline is running.
upvoted 1 times
pinimichele01
7 months, 2 weeks ago
Also, a Vertex AI endpoint is not a good fit for this online inference use case.
upvoted 1 times
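The automatic model refresh behavior described in the comment above can be sketched in plain Python. This is a conceptual illustration only: the class and its methods are hypothetical, not the actual Apache Beam API (in Beam, RunInference wires the refresh through a model-metadata side input such as WatchFilePattern). The sketch shows the key property the commenters are debating: the newest model is hot-swapped in-band, so serving never stops and no redeploy is needed.

```python
import glob
import os
import tempfile


class AutoRefreshModelHandler:
    """Hot-swaps the newest model matching a file pattern, with no redeploy.

    Hypothetical class: mimics the idea behind automatic model refresh,
    not the real Apache Beam RunInference API.
    """

    def __init__(self, model_pattern):
        self.model_pattern = model_pattern
        self.current_path = None
        self.model = None

    def _latest_model_path(self):
        candidates = glob.glob(self.model_pattern)
        return max(candidates, key=os.path.getmtime) if candidates else None

    def _maybe_refresh(self):
        latest = self._latest_model_path()
        if latest and latest != self.current_path:
            # A real handler would deserialize a TF/PyTorch model here;
            # for the sketch, the "model" is just the file's text content.
            with open(latest) as f:
                self.model = f.read().strip()
            self.current_path = latest

    def predict(self, example):
        self._maybe_refresh()  # refresh happens in-band: serving never stops
        return f"{self.model}:{example}"


# Simulate a daily retrain: drop a newer model file, keep predicting.
workdir = tempfile.mkdtemp()
v1 = os.path.join(workdir, "model_v1.txt")
with open(v1, "w") as f:
    f.write("v1")
os.utime(v1, (1_000_000, 1_000_000))  # force an older mtime

handler = AutoRefreshModelHandler(os.path.join(workdir, "model_*.txt"))
print(handler.predict("sensor-reading"))  # -> v1:sensor-reading

v2 = os.path.join(workdir, "model_v2.txt")
with open(v2, "w") as f:
    f.write("v2")
os.utime(v2, (2_000_000, 2_000_000))  # newer mtime triggers the swap
print(handler.predict("sensor-reading"))  # -> v2:sensor-reading, no restart
```

The second prediction is served by the new model even though the handler was never restarted, which is the property that makes option D compatible with the "retrained daily, no downtime" requirement.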
guilhermebutzke
9 months, 1 week ago
Selected Answer: C
My answer: C. The phrase "The models must operate 24/7 without downtime and make sub-millisecond predictions" points to online prediction (option C or D). Given "Models are retrained daily, and you need to deploy these models in a cost-effective way", a Vertex AI Prediction endpoint with autoscaling looks better than the RunInference API with automatic model refresh, because it always serves the retrained models and scales with load. https://cloud.google.com/blog/products/ai-machine-learning/streaming-prediction-with-dataflow-and-vertex
upvoted 3 times
sonicclasps
10 months ago
Selected Answer: C
Low latency -> streaming. C and D could both work, but C is the GCP-native solution, so I chose C.
upvoted 2 times
asmgi
4 months, 2 weeks ago
I don't think autoscaling is relevant to this task, since we have the same number of sensors at any time.
upvoted 1 times
vaibavi
9 months, 2 weeks ago
I think autoscaling will lead to downtime, at least while the replicas are updating.
upvoted 2 times
pinimichele01
7 months ago
I agree, D is better.
upvoted 1 times
b1a8fae
10 months, 1 week ago
Selected Answer: D
Needs to be active 24/7 -> streaming. RunInference API seems like the way to go here, using automatic model refresh on a daily basis. https://beam.apache.org/documentation/ml/about-ml/
upvoted 4 times
Community vote distribution: A (35%), C (25%), B (20%), Other