A data engineer must build an extract, transform, and load (ETL) pipeline to process and load data from 10 source systems into 10 tables that are in an Amazon Redshift database. All the source systems generate .csv, JSON, or Apache Parquet files every 15 minutes. The source systems all deliver files into one Amazon S3 bucket. The file sizes range from 10 MB to 20 GB. The ETL pipeline must function correctly despite changes to the data schema.
Which data pipeline solutions will meet these requirements? (Choose two.)
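The answer choices are not shown here, but the key requirement is tolerating schema drift while moving mixed .csv/JSON/Parquet files from S3 into Redshift. As one illustration of how a serverless approach can cope with changing schemas, below is a minimal sketch of an AWS Glue ETL job that reads a crawler-maintained Data Catalog table as a DynamicFrame and loads it into Redshift. The database, table, connection, and S3 staging names are hypothetical placeholders, and this is not presented as the official answer to the question.

```python
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read from a Data Catalog table kept up to date by a Glue crawler.
# DynamicFrames infer schema per record, so new or changed columns
# do not break the job the way a fixed DataFrame schema would.
source = glueContext.create_dynamic_frame.from_catalog(
    database="source_db",           # hypothetical catalog database
    table_name="source_system_1",   # hypothetical table populated by a crawler
)

# Resolve columns whose type changed between file deliveries.
resolved = source.resolveChoice(choice="make_struct")

# Load into the target Redshift table through a Glue connection.
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=resolved,
    catalog_connection="redshift-conn",   # hypothetical Glue connection name
    connection_options={"dbtable": "public.target_table_1", "database": "dev"},
    redshift_tmp_dir="s3://example-temp-bucket/redshift-staging/",  # hypothetical
)

job.commit()
```

A job like this would typically be scheduled (for example every 15 minutes, matching the delivery cadence) or triggered by S3 events, with one job or one catalog table per source system feeding its corresponding Redshift table.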