The correct answer is D. The inference of incrementally processed records as soon as a trigger is hit.
In this context, a “trigger” refers to the condition that initiates the processing of the next set of data. This is most commonly a time interval (e.g., process new data every second); conceptually it could also be a data size (e.g., process every 1,000 records) or another custom condition, though Spark Structured Streaming's built-in triggers are time-based.
Streaming with Spark as a model deployment strategy involves processing data in small, incremental batches (micro-batches) as they arrive. Spark Structured Streaming allows continuous processing of streaming data, where records are processed incrementally and results are updated in near real time. Processing is typically triggered at regular intervals, known as trigger intervals.
Why not “incrementally processed records with a Spark job”: a scheduled Spark job can initiate processing, but for continuous inference in streaming scenarios it is the trigger that drives each round of scoring, rather than launching a new job per batch. A sketch of the trigger-based pattern follows.
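To make the trigger mechanics concrete, here is a minimal PySpark sketch of trigger-based streaming inference. The Delta paths, feature column names, and the MLflow model URI (models:/my_model/1) are hypothetical placeholders for illustration, not part of the original question.

```python
import mlflow.pyfunc
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-inference").getOrCreate()

# Hypothetical registered model; wrapped as a Spark UDF so it can score
# each micro-batch as it is processed.
predict = mlflow.pyfunc.spark_udf(spark, model_uri="models:/my_model/1")

# Hypothetical Delta source where new records continuously arrive.
events = spark.readStream.format("delta").load("/tmp/events")

# Apply the model to the incoming stream (placeholder feature columns).
scored = events.withColumn("prediction", predict("feature1", "feature2"))

# The trigger fires every 10 seconds; each firing scores only the records
# that arrived since the previous micro-batch (incremental processing).
query = (
    scored.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/inference")
    .trigger(processingTime="10 seconds")
    .start("/tmp/predictions")
)
query.awaitTermination()
```

In this sketch, the .trigger(processingTime="10 seconds") call is what the answer refers to: inference runs on the incrementally arrived records each time the trigger is hit.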
Community vote distribution: A (35%), C (25%), B (20%), Other