Exam Professional Machine Learning Engineer topic 1 question 248 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 248
Topic #: 1

You trained a model, packaged it with a custom Docker container for serving, and deployed it to Vertex AI Model Registry. When you submit a batch prediction job, it fails with this error: "Error model server never became ready. Please validate that your model file or container configuration are valid." There are no additional errors in the logs. What should you do?

  • A. Add a logging configuration to your application to emit logs to Cloud Logging
  • B. Change the HTTP port in your model’s configuration to the default value of 8080
  • C. Change the healthRoute value in your model’s configuration to /healthcheck
  • D. Pull the Docker image locally, and use the docker run command to launch it locally. Use the docker logs command to explore the error logs
Suggested Answer: D
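
Below is a minimal sketch of the local-debugging workflow behind answer D, driving the Docker CLI from Python's subprocess module. The image URI and container name are placeholders, not values from the question:

```python
import subprocess

# Placeholder: substitute the image URI you actually pushed for serving.
IMAGE = "us-docker.pkg.dev/my-project/my-repo/my-model-server:latest"

# Pull the exact image that Vertex AI tried to run.
subprocess.run(["docker", "pull", IMAGE], check=True)

# Launch it locally in the background, mapping the serving port
# (8080 is the default port Vertex AI expects).
subprocess.run(
    ["docker", "run", "-d", "--name", "model-server-debug",
     "-p", "8080:8080", IMAGE],
    check=True,
)

# Inspect the startup logs; errors that never reached Cloud Logging
# (missing dependencies, bad model path, crash on load) show up here.
logs = subprocess.run(
    ["docker", "logs", "model-server-debug"],
    capture_output=True, text=True,
)
print(logs.stdout)
print(logs.stderr)
```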

Comments

wences
2 months, 1 week ago
Selected Answer: B
From StackOverflow: "Validate the container configuration port; it should use port 8080. This configuration is important because Vertex AI sends liveness checks, health checks, and prediction requests to this port on the container." (See the SDK sketch below for where this port is declared.) Pulling the container to the local machine is like stepping back and saying, "It works on my computer," and then solving the problem as it arises.
upvoted 1 times
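
For reference, a hedged sketch of where the serving port and routes are declared when registering a custom container with the Vertex AI Python SDK; the project, display name, image URI, and route paths here are hypothetical and must match what your server actually exposes:

```python
from google.cloud import aiplatform

# Hypothetical project and region.
aiplatform.init(project="my-project", location="us-central1")

# Vertex AI sends liveness checks, health checks, and prediction
# requests to the declared container port; 8080 is the default it
# expects when no port is specified.
model = aiplatform.Model.upload(
    display_name="my-model",  # hypothetical name
    serving_container_image_uri=(
        "us-docker.pkg.dev/my-project/my-repo/my-model-server:latest"
    ),
    serving_container_predict_route="/predict",  # must match your server
    serving_container_health_route="/health",    # must match your server
    serving_container_ports=[8080],
)
```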
...
fitri001
7 months, 1 week ago
Selected Answer: D
Isolating the issue: running the container locally helps determine whether the problem originates from the container configuration or from the Vertex AI deployment environment. If the container runs successfully locally, the issue likely lies with Vertex AI.
Detailed error messages: examining the container logs with docker logs gives detailed error messages specific to the container startup process. These can pinpoint the root cause of the model server failure, such as missing dependencies, an incorrect model format, or resource limitations.
upvoted 1 times
...
omermahgoub
7 months, 2 weeks ago
Selected Answer: D
I vote for D. Pull the Docker image locally, use the docker run command to launch it, and use the docker logs command to explore the error logs. Here's why: 1. Local testing: running the Docker image locally replicates the environment the model server encounters within Vertex AI. 2. Inspecting logs: docker logs lets you see the detailed error messages generated by the model server during startup. These logs might provide specific clues about the cause of the "model server never became ready" error. (A request-probing sketch follows this comment.)
upvoted 1 times
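
Once the container is running locally (per the docker sketch above), you can probe the same routes Vertex AI would hit. The /health and /predict paths and the payload shape below are assumptions that depend entirely on your model server:

```python
import requests

BASE = "http://localhost:8080"

# The health route must return 200 promptly, or Vertex AI concludes
# the server never became ready. "/health" is an assumed path here.
resp = requests.get(f"{BASE}/health", timeout=5)
print("health:", resp.status_code)

# A smoke-test prediction in Vertex AI's "instances" request format;
# the instance shape is a placeholder for your model's real input.
resp = requests.post(
    f"{BASE}/predict",
    json={"instances": [[1.0, 2.0, 3.0]]},
    timeout=30,
)
print("predict:", resp.status_code, resp.text)
```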
...
CMMC
8 months, 2 weeks ago
Selected Answer: B
When deploying a custom container to Vertex AI Model Registry, you need to follow certain requirements for the container configuration. One of these is to use HTTP port 8080 for serving predictions. If you use a different port, the model server might not be able to communicate with Vertex AI, causing the error "Error model server never became ready". To fix this, change the HTTP port in your model's configuration to the default value of 8080 and redeploy the container.
upvoted 1 times
...
guilhermebutzke
9 months, 1 week ago
Selected Answer: D
My answer: D
A: Not correct. While logging is helpful for monitoring and debugging, it won't directly address the model server not becoming ready.
B: Not correct. The error message doesn't indicate a port issue; changing the port preemptively might not resolve the underlying problem.
C: Not correct. Changing the health route could help if the issue is related to health checks, but without further information it's not the most conclusive option.
D: Correct. This option lets you simulate the deployment environment locally and inspect the logs directly, which helps diagnose why the model server never became ready.
upvoted 2 times
...
Yan_X
9 months, 3 weeks ago
Selected Answer: C
The model may be too large (or fail for other reasons) to pass the health check before the timeout. https://cloud.google.com/knowledge/kb/unable-to-deploy-a-large-model-into-a-vertex-endpoint-000010439
upvoted 1 times
Yan_X
8 months, 3 weeks ago
I would revise my answer to D, as healthRoute should be defaulted to /healthcheck.
upvoted 2 times
...
...
vaibavi
9 months, 3 weeks ago
Selected Answer: B
Validate the container configuration port; it should use port 8080. This configuration is important because Vertex AI sends liveness checks, health checks, and prediction requests to this port on the container. https://www.appsloveworld.com/coding/flask/15/vertex-ai-deployment-failed
upvoted 1 times
...
sonicclasps
9 months, 4 weeks ago
Selected Answer: C
When you don't specify a health check, the endpoint uses a default health check that only indicates whether the HTTP server is ready, not whether the model is ready (see the sketch after this comment). https://cloud.google.com/vertex-ai/docs/predictions/custom-container-requirements#health
upvoted 2 times
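
To act on this point, the health handler itself can report model readiness rather than bare HTTP liveness. A minimal Flask sketch, with the load step standing in for your real model deserialization:

```python
from flask import Flask, jsonify

app = Flask(__name__)
model = None  # stays None until the model finishes loading


def load_model():
    # Assumption: placeholder for your real load step (e.g. joblib.load,
    # torch.load), which can take minutes for large models.
    global model
    model = object()


@app.route("/health")
def health():
    # Return 503 until the model is actually loaded, so the platform's
    # health check reflects model readiness, not just server liveness.
    if model is None:
        return jsonify(status="loading"), 503
    return jsonify(status="ready"), 200


if __name__ == "__main__":
    load_model()
    app.run(host="0.0.0.0", port=8080)
```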
...
pikachu007
10 months, 2 weeks ago
Selected Answer: D
Option A: adding logging to Cloud Logging is useful for long-term monitoring but might not provide immediate insight into this specific error.
Options B and C: changing the port or health-check configuration might be necessary if they are wrong, but local debugging usually reveals the root cause more effectively.
upvoted 4 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other