
Exam Professional Machine Learning Engineer topic 1 question 194 discussion

Actual exam question from Google's Professional Machine Learning Engineer exam
Question #: 194
Topic #: 1

You work for a social media company. You want to create a no-code image classification model for an iOS mobile application to identify fashion accessories. You have a labeled dataset in Cloud Storage. You need to configure a training workflow that minimizes cost and serves predictions with the lowest possible latency. What should you do?

  • A. Train the model by using AutoML, and register the model in Vertex AI Model Registry. Configure your mobile application to send batch requests during prediction.
  • B. Train the model by using AutoML Edge, and export it as a Core ML model. Configure your mobile application to use the .mlmodel file directly.
  • C. Train the model by using AutoML Edge, and export the model as a TFLite model. Configure your mobile application to use the .tflite file directly.
  • D. Train the model by using AutoML, and expose the model as a Vertex AI endpoint. Configure your mobile application to invoke the endpoint during prediction.
Suggested Answer: B
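
To make option B's "use the .mlmodel file directly" concrete, here is a minimal Swift sketch (not the official solution): it assumes the AutoML Edge export has been added to the Xcode project as "FashionClassifier.mlmodel" (a hypothetical name), from which Xcode auto-generates the FashionClassifier class. Classification then runs entirely on-device via Core ML and Vision, which is what gives B its low latency and zero serving cost.

```swift
import CoreML
import Vision
import UIKit

// Minimal sketch: assumes the AutoML Edge export was added to the Xcode project
// as "FashionClassifier.mlmodel" (hypothetical name), from which Xcode
// auto-generates the FashionClassifier class.
enum AccessoryClassifier {
    static func classify(_ image: UIImage,
                         completion: @escaping ([VNClassificationObservation]) -> Void) {
        guard let cgImage = image.cgImage else { return completion([]) }
        do {
            // Load the compiled model and wrap it for the Vision framework.
            let coreMLModel = try FashionClassifier(configuration: MLModelConfiguration()).model
            let visionModel = try VNCoreMLModel(for: coreMLModel)

            // The request runs entirely on-device: no network call, no endpoint
            // cost, and the lowest possible prediction latency.
            let request = VNCoreMLRequest(model: visionModel) { request, _ in
                completion(request.results as? [VNClassificationObservation] ?? [])
            }
            try VNImageRequestHandler(cgImage: cgImage).perform([request])
        } catch {
            completion([])
        }
    }
}
```

Labels and confidences come back as VNClassificationObservation values (identifier / confidence), so the app needs no response parsing or authentication code.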

Comments

forport
4 months ago
Selected Answer: B
"for iOS mobile" points to an Edge model, and using the .mlmodel directly on-device minimizes cost.
upvoted 1 times
fitri001
7 months, 1 week ago
Selected Answer: B
  • No-code training: AutoML Edge simplifies model training without needing extensive coding knowledge.
  • On-device processing: Core ML models run directly on the iOS device, minimizing latency by eliminating network calls to a cloud endpoint.
  • Cost-effective: training with AutoML Edge and deploying the model on the device avoids the ongoing costs of a Vertex AI endpoint.
upvoted 2 times
fitri001
7 months, 1 week ago
A. AutoML with batch requests: AutoML can train the model, but batch prediction still goes through the cloud, incurs network latency, and is not suited to real-time mobile applications. C. TFLite: a .tflite export also runs on-device, but it requires extra integration work compared to Core ML, which is native to iOS. D. Vertex AI endpoint: every prediction becomes a cloud round-trip, adding latency and ongoing endpoint costs (see the endpoint sketch after this thread).
upvoted 1 times
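To make the contrast with option D concrete, below is a hedged Swift sketch of what calling a Vertex AI online endpoint from the app would look like. The URL follows the public Vertex AI predict REST API; PROJECT_ID, ENDPOINT_ID, the region, and the OAuth token handling are placeholders, and the instance/parameter shape shown is the one documented for AutoML image models.

```swift
import Foundation

// Sketch of option D (for comparison only): every prediction is a network
// round-trip to a Vertex AI online endpoint, which adds latency and incurs
// per-node serving charges. PROJECT_ID / ENDPOINT_ID / token are placeholders.
func predictViaVertexEndpoint(imageData: Data,
                              accessToken: String,
                              completion: @escaping (Data?) -> Void) {
    let url = URL(string: "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/endpoints/ENDPOINT_ID:predict")!

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // AutoML image models expect base64-encoded image bytes in "content".
    let body: [String: Any] = [
        "instances": [["content": imageData.base64EncodedString()]],
        "parameters": ["confidenceThreshold": 0.5, "maxPredictions": 5]
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data)   // prediction JSON to parse (e.g. displayNames / confidences)
    }.resume()
}
```

Compared with the Core ML sketch above, this path needs credentials in (or proxied for) the mobile app, a network connection, and a continuously running endpoint, which is why B wins on both cost and latency.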
pinimichele01
7 months, 3 weeks ago
Selected Answer: B
Core ML is specifically designed for iOS devices, ensuring efficient inference and low latency.
upvoted 1 times
guilhermebutzke
9 months, 1 week ago
Selected Answer: B
My answer: B. AutoML Edge vs. a Vertex AI endpoint: AutoML Edge is specifically designed for training models that run on edge devices like mobile phones, optimizing them for size and efficiency, which minimizes cost and latency. Plain AutoML can also train the model, but a Vertex AI endpoint adds unnecessary overhead and latency for mobile predictions, and batch requests would not meaningfully improve latency here. Core ML vs. TFLite: TFLite is compatible with several mobile platforms, but Core ML is designed specifically for iOS and offers better performance and integration.
upvoted 1 times
b1a8fae
10 months, 2 weeks ago
Selected Answer: B
B. Confusing, as AutoML Vision Edge seems like the right tool for this problem but is deprecated according to the docs: https://firebase.google.com/docs/ml/automl-image-labeling. I will assume the question needs updating, but we should still go with B; Core ML is specifically designed for iOS apps. https://www.netguru.com/blog/coreml-vs-tensorflow-lite-mobile
upvoted 1 times
BlehMaks
10 months, 2 weeks ago
Selected Answer: B
It's possible to export either Core ML or TF Lite, but since the lowest possible latency is required on iOS, choose Core ML (see the TFLite sketch below for the alternative): https://cloud.google.com/vertex-ai/docs/export/export-edge-model#classification
upvoted 2 times
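For completeness, a hedged sketch of option C: a .tflite export can also run on-device on iOS, but through the TensorFlowLiteSwift dependency rather than the OS-native Core ML/Vision stack, and the app handles tensor pre- and post-processing itself. The file name and input preprocessing are assumptions.

```swift
import TensorFlowLite

// Sketch of option C (for comparison): on-device inference with a TFLite export.
// "fashion_classifier.tflite" is a placeholder; inputData must already be the
// model's expected tensor bytes (e.g. a resized, normalized RGB image).
func classifyWithTFLite(inputData: Data) throws -> [Float32] {
    guard let modelPath = Bundle.main.path(forResource: "fashion_classifier",
                                           ofType: "tflite") else { return [] }

    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.allocateTensors()
    try interpreter.copy(inputData, toInputAt: 0)
    try interpreter.invoke()

    // Read the raw class scores back out of the output tensor.
    let output = try interpreter.output(at: 0)
    return output.data.withUnsafeBytes { Array($0.bindMemory(to: Float32.self)) }
}
```

Both B and C avoid endpoint costs; the comments here favor Core ML because it is the iOS-native path with the tightest latency and integration.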
36bdc1e
10 months, 2 weeks ago
B. For no-code training, AutoML is the best fit; to minimize cost, we export the model as a Core ML model.
upvoted 1 times
pikachu007
10 months, 2 weeks ago
Selected Answer: B
  • No-code model development: AutoML Edge provides a no-code interface for model training, aligning with the requirement.
  • Optimized for mobile devices: Core ML is specifically designed for iOS devices, ensuring efficient inference and low latency.
  • Offline capability: the app can run predictions locally without network calls, reducing cost and keeping the feature available even without internet connectivity.
  • No ongoing endpoint costs: unlike a Vertex AI endpoint, there are no extra costs for hosting and serving the model.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other