HOTSPOT -
You are building an Azure Data Factory solution to process data that is received from Azure Event Hubs and ingested into an Azure Data Lake Storage Gen2 container.
The data will be ingested every five minutes from devices into JSON files. The files have the following naming pattern:
/{deviceType}/in/{YYYY}/{MM}/{DD}/{HH}/{deviceID}_{YYYY}{MM}{DD}{HH}{mm}.json
You need to prepare the data for batch data processing so that there is one dataset per hour per deviceType. The solution must minimize read times.
How should you configure the sink for the copy activity? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
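The goal described above (one dataset per hour per deviceType, minimizing read times) is typically met by a sink that merges the five-minute files into a single file per hour. As a rough illustration only, the sketch below groups files by (deviceType, hour) from the stated naming pattern and merges each group, assuming a hypothetical local directory layout that mirrors the Data Lake path; it is not the ADF sink configuration itself.

```python
import json
from collections import defaultdict
from pathlib import Path

def merge_hourly(root: str) -> dict:
    """Group five-minute JSON files by (deviceType, hour) and merge each
    group into one list of records, mirroring the effect of a 'merge
    files' copy behavior in a sink. Paths are assumed to follow
    {deviceType}/in/{YYYY}/{MM}/{DD}/{HH}/{file}.json."""
    groups = defaultdict(list)
    for f in Path(root).glob("*/in/*/*/*/*/*.json"):
        # parts: deviceType, 'in', YYYY, MM, DD, HH, filename
        device_type, _, yyyy, mm, dd, hh = f.relative_to(root).parts[:6]
        groups[(device_type, f"{yyyy}-{mm}-{dd}T{hh}")].append(
            json.loads(f.read_text())
        )
    return dict(groups)
```

Reading one merged hourly file per deviceType avoids opening twelve small five-minute files per hour, which is why merging at the sink reduces read times for downstream batch processing.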