You work for a small company that has deployed an ML model with autoscaling on Vertex AI to serve online predictions in a production environment. The current model receives about 20 prediction requests per hour with an average response time of one second. You have retrained the same model on a new batch of data, and now you are canary testing it, sending ~10% of production traffic to the new model. During this canary test, you notice that prediction requests for your new model are taking between 30 and 180 seconds to complete. What should you do?