

Exam SnowPro Core topic 1 question 339 discussion

Actual exam question from Snowflake's SnowPro Core
Question #: 339
Topic #: 1

A company needs to read multiple terabytes of data for an initial load as part of a Snowflake migration. The company can control the number and size of CSV extract files.

How does Snowflake recommend maximizing the load performance?

  • A. Use auto-ingest Snowpipes to load large files in a serverless model.
  • B. Produce the largest files possible, reducing the overall number of files to process.
  • C. Produce a larger number of smaller files and process the ingestion with size Small virtual warehouses.
  • D. Use an external tool to issue batched row-by-row inserts within BEGIN TRANSACTION and COMMIT commands.
Suggested Answer: C

Comments

halol
Highly Voted 1 year, 11 months ago
Selected Answer: C
C, I think. https://www.analytics.today/blog/top-3-snowflake-performance-tuning-tactics#:~:text=Avoid%20Scanning%20Files&text=Before%20copying%20data%2C%20Snowflake%20checks,that%20have%20already%20been%20loaded.
upvoted 7 times
...
aemilka
Most Recent 2 months ago
Selected Answer: C
Split larger files into a greater number of smaller files to distribute the load among the compute resources in an active warehouse. The number of data files that are processed in parallel is determined by the amount of compute resources in a warehouse. We recommend splitting large files by line to avoid records that span chunks. https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare#:~:text=Split%20larger%20files,that%20span%20chunks.
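The splitting advice above can be sketched in code. This is a minimal illustration, assuming a plain newline-delimited CSV with no header row and no quoted fields that contain newlines; the output file names and the ~100 MB threshold are illustrative choices, not Snowflake requirements:

```python
# Sketch: split a large CSV extract into smaller chunks on line boundaries,
# so that no record spans two files and the files can be loaded in parallel
# by the compute resources of an active warehouse.

def split_csv(src_path, out_prefix, max_bytes=100 * 1024 * 1024):
    """Split src_path into numbered chunk files, breaking only at line ends."""
    chunk_paths = []
    chunk_idx = 0
    written = 0
    out = None
    with open(src_path, "rb") as src:
        for line in src:  # iterate line by line so records stay intact
            if out is None or written >= max_bytes:
                if out:
                    out.close()
                chunk_idx += 1
                path = f"{out_prefix}_{chunk_idx:04d}.csv"
                chunk_paths.append(path)
                out = open(path, "wb")
                written = 0
            out.write(line)
            written += len(line)
    if out:
        out.close()
    return chunk_paths
```

Because the split happens only at line ends, concatenating the chunks reproduces the original file exactly, which is the property that makes parallel ingestion safe.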
upvoted 1 times
...
MultiCloudIronMan
1 year, 4 months ago
Selected Answer: C
Correct.
upvoted 1 times
...
OTE
1 year, 8 months ago
Selected Answer: C
I'd go for C. A serverless approach (A) is usually not recommended for large files due to the higher costs.
upvoted 3 times
...
AS314
1 year, 10 months ago
https://www.snowflake.com/blog/best-practices-for-data-ingestion/ I think A is correct
upvoted 2 times
BigDataBB
1 year, 10 months ago
Snowpipe is designed for continuous ingestion and is built on COPY. The COPY command enables loading batches of data available in external cloud storage or an internal stage, so for the initial load I think COPY is the better solution. So the answer is C.
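For the batch route described above, a one-time COPY is typically issued after the split files are staged. A minimal sketch in Python: all identifiers here (target_table, my_stage, csv_format) are placeholders, not names from the thread, and the live connection step is only indicated in comments:

```python
# Sketch: build a COPY INTO statement for a one-time bulk load of staged
# CSV files, as contrasted with Snowpipe's continuous ingestion.
# target_table, my_stage, and csv_format are illustrative placeholders.

def build_copy_statement(table, stage, file_format="csv_format"):
    """Return a COPY INTO statement that loads all files from a stage."""
    return (
        f"COPY INTO {table} FROM @{stage} "
        f"FILE_FORMAT = (FORMAT_NAME = '{file_format}') "
        f"ON_ERROR = 'ABORT_STATEMENT'"
    )

sql = build_copy_statement("target_table", "my_stage")
print(sql)
# With a live connection this would be executed via, e.g.:
#   import snowflake.connector
#   conn = snowflake.connector.connect(...credentials...)
#   conn.cursor().execute(sql)
```

The warehouse then processes the staged files in parallel, up to the degree of parallelism its size allows, which is why many moderately sized files outperform one huge file.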
upvoted 2 times
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other
