Exam Professional Machine Learning Engineer topic 1 question 49 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 49
Topic #: 1

You work for an online travel agency that also sells advertising placements on its website to other companies. You have been asked to predict the most relevant web banner that a user should see next. Security is important to your company. The model latency requirements are 300ms@p99, the inventory is thousands of web banners, and your exploratory analysis has shown that navigation context is a good predictor. You want to implement the simplest solution. How should you configure the prediction pipeline?

  • A. Embed the client on the website, and then deploy the model on AI Platform Prediction.
  • B. Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on AI Platform Prediction.
  • C. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user's navigation context, and then deploy the model on AI Platform Prediction.
  • D. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Memorystore for writing and for reading the user's navigation context, and then deploy the model on Google Kubernetes Engine.
Suggested Answer: C
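For orientation, here is a minimal sketch of what the suggested option C pipeline could look like: an App Engine gateway that reads the user's navigation context from Bigtable and forwards it to a model deployed on AI Platform Prediction. The project, instance, table, column family, model name, and feature layout below are all hypothetical, not part of the question.

```python
# Hypothetical option C gateway (Python 3 / Flask on App Engine standard).
# Project, instance, table, column family, and model names are assumptions.
from flask import Flask, request, jsonify
from google.cloud import bigtable
import googleapiclient.discovery

app = Flask(__name__)

# Bigtable holds one row per user, keyed by user ID, with the recent
# navigation context in the "nav" column family.
bt_client = bigtable.Client(project="my-project")
context_table = bt_client.instance("context-instance").table("navigation_context")

# AI Platform Prediction is called through the "ml" v1 discovery API.
ml_service = googleapiclient.discovery.build("ml", "v1")
MODEL_NAME = "projects/my-project/models/banner_ranker"

@app.route("/predict_banner")
def predict_banner():
    user_id = request.args.get("user_id", "")

    # Low-latency point read of the user's navigation context from Bigtable.
    row = context_table.read_row(user_id.encode("utf-8"))
    context = []
    if row is not None:
        for cells in row.cells.get("nav", {}).values():
            context.append(cells[0].value.decode("utf-8"))

    # Online prediction against the deployed banner-ranking model.
    response = ml_service.projects().predict(
        name=MODEL_NAME,
        body={"instances": [{"navigation_context": context}]},
    ).execute()

    return jsonify(response["predictions"][0])
```

Option B would be the same gateway without the Bigtable read (a sketch appears further down), which is where most of the B-versus-C debate in the comments comes from.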

Comments

Paul_Dirac
Highly Voted 3 years, 8 months ago
Security => not A. B doesn't handle processing with the banner inventory. D: deployment on GKE is less simple than on AI Platform; besides, Memorystore is in-memory while banners are stored persistently. Ans: C
upvoted 13 times
pinimichele01
11 months, 4 weeks ago
B: doesn't handle processing with banner inventory ---> not true...
upvoted 2 times
...
...
Celia20210714
Highly Voted 3 years, 9 months ago
ANS: C. GAE + IAP: https://medium.com/google-cloud/secure-cloud-run-cloud-functions-and-app-engine-with-api-key-73c57bededd1
Bigtable at low latency: https://cloud.google.com/bigtable#section-2
upvoted 8 times
...
AB_C
Most Recent 4 months, 3 weeks ago
Selected Answer: B
B - right answer
upvoted 1 times
...
ccb23cc
10 months ago
Selected Answer: C
They affirm that navigation context is a good predictor for your model, so you need to be able to write new context (more data means a better model) and read it back to use for each prediction. BigQuery is an OLAP system, so writes and reads can take around 2 seconds; Bigtable is an OLTP-style system that can serve writes and reads in about 9 milliseconds. Conclusion: since the latency requirement is 300ms@p99, the only choice is Bigtable. https://galvarado.com.mx/post/comparaci%C3%B3n-de-bases-de-datos-en-google-cloud-datastore-vs-bigtable-vs-cloud-sql-vs-spanner-vs-bigquery/
upvoted 2 times
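To make the write/read path in the comment above concrete, here is a rough sketch of recording a page view in Bigtable and reading it back; the instance, table, and column names are assumptions, not from the question.

```python
# Hypothetical sketch of the navigation-context table described above.
# Instance, table, and column names are assumptions.
import datetime
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("context-instance").table("navigation_context")

def record_page_view(user_id: str, page_url: str) -> None:
    # Single-row writes in Bigtable typically complete in single-digit
    # milliseconds, well inside the 300ms@p99 budget.
    row = table.direct_row(user_id.encode("utf-8"))
    row.set_cell("nav", b"last_page", page_url.encode("utf-8"),
                 timestamp=datetime.datetime.now(datetime.timezone.utc))
    row.commit()

def read_last_page(user_id: str):
    # Point read by row key, also millisecond-scale.
    row = table.read_row(user_id.encode("utf-8"))
    if row is None:
        return None
    return row.cells["nav"][b"last_page"][0].value.decode("utf-8")
```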
...
PhilipKoku
10 months, 2 weeks ago
Selected Answer: C
C) Bigtable for low latency
upvoted 2 times
...
AnnaR
11 months, 3 weeks ago
Selected Answer: B
Was torn between B and C, but decided on B, because the question asks how we should configure the PREDICTION pipeline. Since the exploratory analysis already identified navigation context as a good predictor, the focus should be on the prediction model itself.
upvoted 4 times
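For contrast with the option C sketch near the top, option B as argued here would have the page send its own navigation context with the request, so the gateway needs no database at all; names are again hypothetical.

```python
# Hypothetical option B gateway: the client supplies the navigation context,
# and the gateway only validates and forwards the request. Names are assumptions.
from flask import Flask, request, jsonify
import googleapiclient.discovery

app = Flask(__name__)

ml_service = googleapiclient.discovery.build("ml", "v1")
MODEL_NAME = "projects/my-project/models/banner_ranker"

@app.route("/predict_banner", methods=["POST"])
def predict_banner():
    # The embedded client posts the user's recent navigation context directly,
    # so no database round trip is needed.
    context = request.get_json().get("navigation_context", [])
    response = ml_service.projects().predict(
        name=MODEL_NAME,
        body={"instances": [{"navigation_context": context}]},
    ).execute()
    return jsonify(response["predictions"][0])
```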
...
gscharly
12 months ago
Selected Answer: C
agree with Paul_Dirac
upvoted 2 times
...
rightcd
1 year, 1 month ago
look at Q80
upvoted 3 times
...
Sum_Sum
1 year, 5 months ago
Selected Answer: B
I was torn between B and C. But I really don't see the need for a DB
upvoted 3 times
...
Mickey321
1 year, 5 months ago
Selected Answer: B
Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on AI Platform Prediction.
upvoted 1 times
...
harithacML
1 year, 9 months ago
Selected Answer: B
Security (gateway) + simplest (AI Platform, no DB)
upvoted 1 times
...
Liting
1 year, 9 months ago
Selected Answer: C
Bigtable is recommended for storage in the case scenario.
upvoted 2 times
...
tavva_prudhvi
1 year, 9 months ago
Selected Answer: C
B is also a possible solution, but it does not include a database for storing and retrieving the user's navigation context. This means that every time a user visits a page, the gateway would need to query the website to retrieve the navigation context, which could be slow and inefficient. By using Cloud Bigtable to store the navigation context, the gateway can quickly retrieve the context from the database and pass it to the model for prediction. This makes the overall prediction pipeline more efficient and scalable. Therefore, C is a better option compared to B.
upvoted 6 times
...
friedi
1 year, 10 months ago
Selected Answer: B
B is correct, C introduces computational overhead, unnecessarily increasing serving latency.
upvoted 1 times
...
Voyager2
1 year, 10 months ago
Selected Answer: C
C. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user's navigation context, and then deploy the model on AI Platform Prediction.
https://cloud.google.com/architecture/minimizing-predictive-serving-latency-in-machine-learning#choosing_a_nosql_database
Typical use cases for Bigtable include ad prediction that leverages dynamically aggregated values over all ad requests and historical data.
upvoted 2 times
...
CloudKida
1 year, 11 months ago
Selected Answer: C
Bigtable is a massively scalable NoSQL database service engineered for high throughput and low-latency workloads. It can handle petabytes of data, with millions of reads and writes per second at a latency on the order of milliseconds. Typical use cases for Bigtable are:
  • Fraud detection that leverages dynamically aggregated values; applications in Fintech and Adtech are usually subject to heavy reads and writes.
  • Ad prediction that leverages dynamically aggregated values over all ad requests and historical data.
  • Booking recommendation based on the overall customer base's recent bookings.
upvoted 2 times
...
M25
1 year, 11 months ago
Selected Answer: C
Went with C
upvoted 2 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other.