Which statement describes Delta Lake Auto Compaction?
A.
An asynchronous job runs after the write completes to detect if files could be further compacted; if yes, an OPTIMIZE job is executed toward a default of 1 GB.
B.
Before a Jobs cluster terminates, OPTIMIZE is executed on all tables modified during the most recent job.
C.
Optimized writes use logical partitions instead of directory partitions; because partition boundaries are only represented in metadata, fewer small files are written.
D.
Data is queued in a messaging bus instead of committing data directly to memory; all data is committed from the messaging bus in one batch once the job is complete.
E.
An asynchronous job runs after the write completes to detect if files could be further compacted; if yes, an OPTIMIZE job is executed toward a default of 128 MB.
If you go through these docs, one thing is clear: auto compaction is not an asynchronous job, so we have to eliminate A and E. C is wrong too; auto compaction does nothing special with respect to partitions. D is also wrong. And the 128 MB file size is a legacy config; the latest default is dynamic. So we are left with B.
A and E are wrong because auto compaction is a synchronous operation!
I vote for B
As per documentation - "Auto compaction occurs after a write to a table has succeeded and runs synchronously on the cluster that has performed the write. Auto compaction only compacts files that haven’t been compacted previously."
https://docs.delta.io/latest/optimizations-oss.html
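For reference, auto compaction is usually switched on per table via a Delta table property. A minimal sketch (the table name `events` is hypothetical; property names may vary by Delta/runtime version, so check your docs):

```sql
-- Enable auto compaction for a single table (table name is hypothetical)
ALTER TABLE events SET TBLPROPERTIES (
  'delta.autoOptimize.autoCompact' = 'true'
);
```

Once set, small files produced by writes to this table are compacted synchronously after each successful write, per the doc quote above.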
E. An asynchronous job runs after the write completes to detect if files could be further compacted; if yes, an OPTIMIZE job is executed toward a default of 128 MB.
https://community.databricks.com/t5/data-engineering/what-is-the-difference-between-optimize-and-auto-optimize/td-p/21189
OPTIMIZE's default target file size is 1 GB; however, this question deals with auto compaction, which, when enabled, runs OPTIMIZE with a 128 MB target file size by default.
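For context, the two different defaults mentioned here correspond to separate Spark session configurations that can be tuned independently. A hedged sketch (the values shown are the commonly documented defaults on Databricks; verify against your runtime's docs):

```sql
-- Enable auto compaction for the current session
SET spark.databricks.delta.autoCompact.enabled = true;

-- Target file size for auto compaction (default 128 MB = 134217728 bytes),
-- distinct from the ~1 GB default used by a manual OPTIMIZE
SET spark.databricks.delta.autoCompact.maxFileSize = 134217728;
```

This is why A and E describe the same mechanism but disagree on the target size: the 1 GB figure belongs to manual OPTIMIZE, while auto compaction targets 128 MB.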
Delta Lake's Auto Compaction feature is designed to improve the efficiency of data storage by reducing the number of small files in a Delta table. After data is written to a Delta table, an asynchronous job can be triggered to evaluate the file sizes. If it determines that there are a significant number of small files, it will automatically run the OPTIMIZE command, which coalesces these small files into larger ones, typically aiming for files around 1 GB in size for optimal performance.
E is incorrect because the statement is similar to A but with an incorrect default file size target.