You work for a small company that has deployed an ML model with autoscaling on Vertex AI to serve online predictions in a production environment. The current model receives about 20 prediction requests per hour with an average response time of one second. You have retrained the same model on a new batch of data, and now you are canary testing it, sending ~10% of production traffic to the new model. During this canary test, you notice that prediction requests for your new model are taking between 30 and 180 seconds to complete. What should you do?
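The 90/10 canary split described in the scenario can be sketched as a deterministic weighted router. This is a minimal illustration of the routing concept only, not Vertex AI's implementation; the function name and hashing scheme are hypothetical.

```python
def route_request(request_id: int, canary_fraction: float = 0.10) -> str:
    """Route ~canary_fraction of traffic to the canary (new) model.

    Hashing the request id into 100 buckets gives a stable, repeatable
    split: the same request id always lands on the same model, which
    makes latency comparisons between the two variants easier to trace.
    Names and scheme are illustrative, not the Vertex AI internals.
    """
    bucket = hash(request_id) % 100
    return "canary" if bucket < canary_fraction * 100 else "production"
```

With a 10% fraction, roughly one request in ten hits the retrained model, matching the canary setup in the question; the remaining 90% continue to be served by the current production model.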