Exam Professional Machine Learning Engineer topic 1 question 257 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 257
Topic #: 1

You recently trained an XGBoost model on tabular data. You plan to expose the model for internal use as an HTTP microservice. After deployment, you expect a small number of incoming requests. You want to productionize the model with the least amount of effort and latency. What should you do?

  • A. Deploy the model to BigQuery ML by using CREATE MODEL with the BOOSTED_TREE_REGRESSOR statement, and invoke the BigQuery API from the microservice.
  • B. Build a Flask-based app. Package the app in a custom container on Vertex AI, and deploy it to Vertex AI Endpoints.
  • C. Build a Flask-based app. Package the app in a Docker image, and deploy it to Google Kubernetes Engine in Autopilot mode.
  • D. Use a prebuilt XGBoost Vertex container to create a model, and deploy it to Vertex AI Endpoints.
Suggested Answer: D
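
For reference, a minimal sketch of option D with the Vertex AI Python SDK. The project, bucket, display name, and machine type below are illustrative placeholders, not part of the question:

```python
# Sketch of option D: serve an XGBoost model from Vertex AI's prebuilt
# serving container. The saved model file must be named model.bst,
# model.joblib, or model.pkl and live in the GCS directory passed as
# artifact_uri.

def vertex_upload_args(artifact_uri, xgboost_version="1.7"):
    """Assemble keyword arguments for aiplatform.Model.upload()."""
    return {
        "display_name": "xgb-tabular-model",  # illustrative name
        "artifact_uri": artifact_uri,         # e.g. gs://my-bucket/xgb/
        # Prebuilt XGBoost CPU prediction image; the version tag must match
        # the XGBoost version used when saving the model.
        "serving_container_image_uri": (
            "us-docker.pkg.dev/vertex-ai/prediction/"
            "xgboost-cpu." + xgboost_version.replace(".", "-") + ":latest"
        ),
    }

# With google-cloud-aiplatform installed and credentials configured, the
# actual deployment is two calls (commented out so the sketch stays local):
#
#   from google.cloud import aiplatform
#   aiplatform.init(project="my-project", location="us-central1")
#   model = aiplatform.Model.upload(**vertex_upload_args("gs://my-bucket/xgb/"))
#   endpoint = model.deploy(machine_type="n1-standard-2", min_replica_count=1)
```

For the small internal traffic the question describes, a single small replica without autoscaling keeps both cost and effort minimal.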

Comments

pikachu007
Highly Voted 11 months, 2 weeks ago
Selected Answer: D
  • Prebuilt container: eliminates the need to build and manage a custom container, reducing development time and complexity.
  • Vertex AI Endpoints: provides a managed serving infrastructure with low latency and high availability, optimizing prediction performance.
  • Minimal effort: creating a Vertex model and deploying it to an endpoint are simple steps, streamlining the process.
upvoted 7 times
b1a8fae
Highly Voted 11 months, 1 week ago
Selected Answer: D
Bit lost here. I would discard building a Flask app, since that is the opposite of "minimum effort". Between A and D, I guess a prebuilt container (D) involves less effort, but I am not 100% confident.
upvoted 5 times
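
For contrast, even a minimal Flask serving app (options B and C) means writing, containerizing, and maintaining code yourself. A hedged sketch, with the model loading stubbed out and the route and payload shape chosen for illustration:

```python
# Minimal Flask prediction service: everything here is effort that the
# prebuilt container in option D avoids.

from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real app the booster would be loaded once at startup:
#   import xgboost as xgb
#   booster = xgb.Booster()
#   booster.load_model("model.bst")

@app.route("/predict", methods=["POST"])
def predict():
    instances = request.get_json()["instances"]
    # Placeholder score per row instead of booster.predict(xgb.DMatrix(...)).
    return jsonify({"predictions": [0.0 for _ in instances]})

# Run locally with: app.run(host="0.0.0.0", port=8080)
# Then package it in a Dockerfile, push the image, and deploy it — each a
# manual step that the prebuilt-container route skips.
```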
AzureDP900
Most Recent 5 months, 3 weeks ago
Option D is correct: using a prebuilt XGBoost Vertex container is the most straightforward approach. The container is specifically designed for running XGBoost models in production and can be deployed directly to Vertex AI Endpoints, exposing the model as an HTTP microservice with minimal additional work.
upvoted 1 times
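
Once deployed, the microservice is invoked over HTTP. A sketch of the request body Vertex AI online prediction expects; the feature values are made up for illustration:

```python
# Vertex AI online prediction takes a JSON body of the form
# {"instances": [...]}, where each inner list is one row of features in
# the same order used at training time.

import json

def predict_body(rows):
    return json.dumps({"instances": rows})

body = predict_body([[5.1, 3.5, 1.4, 0.2]])

# POST this body to
#   https://{region}-aiplatform.googleapis.com/v1/projects/{project}
#     /locations/{region}/endpoints/{endpoint_id}:predict
# with an OAuth bearer token; the response carries {"predictions": [...]}.
```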
fitri001
8 months, 2 weeks ago
Selected Answer: D
  • Package the model: use a library like xgboost-server to create a minimal server for your XGBoost model. This package helps convert your model into a format suitable for serving predictions through an HTTP endpoint.
  • Deploy to Cloud Functions: deploy the packaged model server as a Cloud Function on Google Cloud Platform (GCP). Cloud Functions are serverless, lightweight execution environments ideal for event-driven applications like microservices.
  • Configure trigger: set up an HTTP trigger for your Cloud Function, allowing it to be invoked through HTTP requests.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other
