You use BigQuery as your centralized analytics platform. New data is loaded every day, and an ETL pipeline modifies the original data and prepares it for end users. This ETL pipeline is regularly modified and can generate errors, but sometimes the errors are detected only two weeks later. You need to provide a method to recover from these errors, and your backups should be optimized for storage costs. How should you organize your data in BigQuery and store your backups?
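One commonly recommended approach for this scenario: because errors may surface two weeks later, beyond BigQuery's seven-day time-travel window, keep the table partitioned by load day and export each day's partition, compressed, to Cloud Storage (e.g. a Coldline bucket) as a cost-optimized backup. A hedged sketch using the `bq` CLI; the dataset, table, and bucket names below are placeholders, not from the question:

```
# Assumes a day-partitioned table mydataset.events and a Coldline
# bucket gs://my-backup-bucket -- both names are hypothetical.

# Back up one day's partition as compressed Avro (storage-optimized).
bq extract \
  --destination_format=AVRO \
  --compression=SNAPPY \
  'mydataset.events$20240101' \
  'gs://my-backup-bucket/events/20240101/*.avro'

# To recover after a bad ETL run, overwrite just that partition
# from the backup:
bq load \
  --source_format=AVRO \
  --replace \
  'mydataset.events$20240101' \
  'gs://my-backup-bucket/events/20240101/*.avro'
```

Partitioning keeps recovery surgical (only the affected days are reloaded), while compressed exports in a Nearline/Coldline bucket keep backup storage costs low.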