
Exam Professional Machine Learning Engineer topic 1 question 105 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 105
Topic #: 1

You work for a gaming company that develops massively multiplayer online (MMO) games. You built a TensorFlow model that predicts whether players will make in-app purchases of more than $10 in the next two weeks. The model’s predictions will be used to adapt each user’s game experience. User data is stored in BigQuery. How should you serve your model while optimizing cost, user experience, and ease of management?

  • A. Import the model into BigQuery ML. Make predictions using batch reading data from BigQuery, and push the data to Cloud SQL.
  • B. Deploy the model to Vertex AI Prediction. Make predictions using batch reading data from Cloud Bigtable, and push the data to Cloud SQL.
  • C. Embed the model in the mobile application. Make predictions after every in-app purchase event is published in Pub/Sub, and push the data to Cloud SQL.
  • D. Embed the model in the streaming Dataflow pipeline. Make predictions after every in-app purchase event is published in Pub/Sub, and push the data to Cloud SQL.
Suggested Answer: A 🗳️
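The suggested answer hinges on BigQuery ML's ability to import a TensorFlow SavedModel and serve batch predictions with plain SQL (the feature documented at the link hiromi posted below). A minimal sketch of the two queries involved, wrapped in Python only for illustration; the dataset, model, table, and bucket names are all hypothetical placeholders:

```python
# Sketch of option A: import the TensorFlow SavedModel into BigQuery ML,
# then predict in batch with SQL. All names below are hypothetical.

IMPORT_MODEL_SQL = """
CREATE OR REPLACE MODEL `gaming.purchase_model`
  OPTIONS (MODEL_TYPE = 'TENSORFLOW',
           MODEL_PATH = 'gs://my-bucket/saved_model/*')
"""

BATCH_PREDICT_SQL = """
SELECT player_id, predicted_label
FROM ML.PREDICT(MODEL `gaming.purchase_model`,
                (SELECT * FROM `gaming.player_features`))
"""

# Running these requires a real GCP project, e.g. via the BigQuery client:
# from google.cloud import bigquery
# bigquery.Client().query(IMPORT_MODEL_SQL).result()
print("ML.PREDICT" in BATCH_PREDICT_SQL)  # True
```

With the predictions materialized in BigQuery, a scheduled query or export job could then push the rows to Cloud SQL, which is the last step option A describes.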

Comments

hiromi
Highly Voted 1 year, 6 months ago
Selected Answer: A
It seems A (not sure) - https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create-tensorflow
upvoted 12 times
phani49
Most Recent 2 days, 4 hours ago
Selected Answer: D
Why D is the best choice:
  • It provides real-time predictions, which is crucial for a good user experience in an MMO setting.
  • It leverages Google Cloud’s managed services (Dataflow, Pub/Sub, Cloud SQL) to reduce operational overhead and simplify management.
  • It allows you to centrally manage your model and easily update it without requiring changes to client applications.
  • It optimizes cost by using a pay-as-you-go, autoscaling service rather than running large-scale batch jobs or deploying models on individual user devices.

Option A (import the model into BigQuery ML and do batch predictions): batch predictions are not real-time. This approach introduces a significant delay between data ingestion and predictions. Not ideal if you need to adapt the user experience quickly based on recent behavior.
upvoted 1 times
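The streaming flow argued for above (Pub/Sub event in, prediction out, row to Cloud SQL) can be sketched without any GCP dependencies. This is a minimal stand-in, not a real pipeline: `stub_model` replaces the actual TensorFlow model, `handle_event` approximates what a Dataflow DoFn body would do per message, and every name and threshold here is hypothetical:

```python
# Hedged sketch of the option-D flow (Pub/Sub -> Dataflow -> Cloud SQL).
# All names are hypothetical stand-ins; no real GCP services are touched.

import json

def stub_model(features):
    """Hypothetical stand-in for model inference: returns a probability
    that the player spends more than $10 in the next two weeks."""
    # Toy heuristic instead of a real TensorFlow call.
    return min(1.0, features.get("past_purchases", 0) * 0.2)

def handle_event(message: bytes) -> dict:
    """What a streaming Dataflow step would do per Pub/Sub message:
    parse the event, run inference, and emit a row for Cloud SQL."""
    event = json.loads(message)
    score = stub_model(event)
    return {
        "player_id": event["player_id"],
        "will_spend_over_10": score > 0.5,
        "score": score,
    }

row = handle_event(json.dumps({"player_id": "p1", "past_purchases": 4}).encode())
print(row["will_spend_over_10"])  # True (0.8 > 0.5)
```

In an actual pipeline this logic would live inside a `beam.DoFn` (or Beam's RunInference transform) with a Cloud SQL sink instead of a returned dict.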
pinimichele01
2 months, 1 week ago
Selected Answer: A
Making predictions after every in-app purchase is not necessary -> A
upvoted 2 times
Mickey321
7 months, 1 week ago
Selected Answer: D
Embedding the model in a streaming Dataflow pipeline allows low latency predictions on real-time events published to Pub/Sub. This provides a responsive user experience. Dataflow provides a managed service to scale predictions and integrate with Pub/Sub, without having to manage servers. Streaming predictions only when events occur optimizes cost compared to bulk or client-side prediction. Pushing results to Cloud SQL provides a managed database for persistence. In contrast, options A and B use inefficient batch predictions. Option C increases mobile app size and cost.
upvoted 1 times
SamuelTsch
11 months, 2 weeks ago
Selected Answer: D
D could be correct
upvoted 1 times
Nxtgen
11 months, 3 weeks ago
Selected Answer: D
These were my reasons for choosing D as the best option:
B -> Vertex AI would not minimize cost
C -> Would not optimize user experience (this may lead to slow running of the game (lag)?)
A -> Would not optimize ease of management / automation
D -> Best choice?
upvoted 1 times
tavva_prudhvi
7 months, 2 weeks ago
Why do you want to make a prediction after every app purchase bro?
upvoted 3 times
M25
1 year, 1 month ago
Selected Answer: D
"Used to adapt each user's game experience" points to non-batch, hence excluding A and B, and embedding the model in the mobile app would not necessarily "optimize cost". Plus, the classic streaming solution builds on Dataflow along with Pub/Sub and BigQuery, and embedding ML in Dataflow is low-code: https://cloud.google.com/blog/products/data-analytics/latest-dataflow-innovations-for-real-time-streaming-and-aiml. Apparently a modified version of the question points in the same direction: https://mikaelahonen.com/en/data/gcp-mle-exam-questions/
upvoted 3 times
ciro_li
11 months ago
There's no need to make a prediction after every in-app purchase event. Am I wrong?
upvoted 3 times
TNT87
1 year, 2 months ago
Selected Answer: A
Yeah, it's A
upvoted 2 times
TNT87
1 year, 3 months ago
Selected Answer: C
Answer C
upvoted 2 times
tavva_prudhvi
1 year, 2 months ago
Option C, embedding the model in the mobile application, can increase the size of the application and may not be suitable for real-time prediction.
upvoted 2 times
Community vote distribution
A (35%)
C (25%)
B (20%)
Other
