A Structured Streaming job deployed to production has been incurring higher-than-expected cloud storage costs. During normal execution, each microbatch completes in under 3 seconds, and at least 12 times per minute a microbatch is processed that contains zero records. The streaming write was configured with the default trigger settings. The production job is scheduled alongside many other Databricks jobs in a workspace with instance pools provisioned to reduce start-up time for batch jobs.
Holding all other variables constant, and assuming records must be processed in less than 10 minutes, which adjustment will meet the requirement?
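For context: with the default trigger, Structured Streaming starts a new microbatch as soon as the previous one finishes, so a near-idle stream produces many empty microbatches, each of which still writes checkpoint and transaction-log data to cloud storage. Lengthening the trigger interval reduces that write frequency while keeping end-to-end latency within the stated bound. Below is a minimal sketch of a processing-time trigger in PySpark; the `df` streaming DataFrame, the paths, and the "5 minutes" interval are illustrative placeholders, not values from the original question.

```
# Sketch only: assumes `spark` is an active SparkSession and `df` is an
# existing streaming DataFrame; paths are hypothetical placeholders.
query = (
    df.writeStream
      .format("delta")
      .option("checkpointLocation", "/tmp/checkpoints/demo")  # placeholder path
      .trigger(processingTime="5 minutes")  # fire at most one microbatch per 5 min
      .start("/tmp/output/demo")            # placeholder output path
)
```

With a 5-minute processing-time trigger, the stream runs at most 12 microbatches per hour instead of hundreds per minute, cutting per-batch storage writes while still delivering records well inside the 10-minute requirement.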