Exam Professional Cloud Architect topic 1 question 196 discussion

Actual exam question from Google's Professional Cloud Architect
Question #: 196
Topic #: 1

Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity and the overall cost. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?

  • A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.
  • B. Use the Data Transfer appliance to perform an offline migration.
  • C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage.
  • D. Upload the data with gcloud storage cp.
Suggested Answer: D
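As a sanity check on the timing claims that dominate the discussion below, here is a back-of-envelope calculation (my own sketch, not from the thread; the 70% utilization figure is an assumption):

```python
# Back-of-envelope transfer-time estimate: time = data size / effective bandwidth.
# The 70% utilization figure below is an assumption, not from the question.

def transfer_hours(size_tb, link_gbps, utilization=1.0):
    """Hours to move size_tb terabytes over a link_gbps link."""
    bits = size_tb * 1e12 * 8                        # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 3600.0

print(transfer_hours(10, 1))        # ~22.2 h at full line rate
print(transfer_hours(10, 1, 0.7))   # ~31.7 h at 70% utilization
# Both are well under the one-week threshold Google cites as the point
# where Transfer Appliance becomes preferable to an online transfer.
```

Either way the transfer completes in roughly a day, which is the core of the argument for answer D.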

Comments

cchiaramelli
Highly Voted 1 year, 5 months ago
Selected Answer: D
The Transfer Appliance docs say it is suitable when "It would take more than one week to upload your data over the network." Since 10 TB would take far less than a week at this bandwidth, I would go for D. https://cloud.google.com/transfer-appliance/docs/4.0/overview#suitability
upvoted 13 times
exam4c3
4 months, 4 weeks ago
Your link says Transfer Appliance is a good fit for your data transfer needs if it would take more than one week to upload your data over the network, but this workload takes about a day to complete via the CLI.
upvoted 2 times
...
...
mgm7
Highly Voted 1 year, 3 months ago
Selected Answer: B
maximum object size in GCS is 5TB
upvoted 6 times
mstaicu
9 months ago
So how would it work with B? The data still needs to end up in GCS. Also, who says the export is one large file?
upvoted 2 times
...
klefevre08
1 year, 2 months ago
So no answer is possible, according to you?
upvoted 2 times
...
spuyol
1 year, 1 month ago
you could solve that using split command
upvoted 1 times
...
...
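spuyol's split suggestion above can be sketched: if the export really were a single file over Cloud Storage's 5 TB object limit, you could split it into parts before uploading (on Linux, `split -b` followed by `gcloud storage cp` of the parts). A minimal Python equivalent of the chunking step (illustrative only; the file names are made up):

```python
import os

def split_file(path, chunk_bytes, out_dir):
    """Split `path` into sequentially numbered parts of at most `chunk_bytes` each."""
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(chunk_bytes)  # a real tool would stream in smaller buffers
            if not chunk:
                break
            part_path = os.path.join(out_dir, "export.part%04d" % index)
            with open(part_path, "wb") as dst:
                dst.write(chunk)
            parts.append(part_path)
            index += 1
    return parts
```

Concatenating the parts in order reproduces the original file, so the 5 TB object limit alone does not rule out an online transfer.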
frank_tsai_tech
Most Recent 2 weeks, 1 day ago
Selected Answer: D
Explanation:
  • With a 1 Gbps connection, transferring 10 TB online is practical: theoretical estimates show it can take around 22–30 hours if the link is fully utilized.
  • Compressing the data reduces the amount to transfer, and the -m flag enables parallel (multi-threaded) uploads, further reducing transfer time.
  • This approach minimizes both the transfer time and overall cost, and it avoids the added delays and expense associated with ordering and shipping a Transfer Appliance.
  • Google's best practices indicate that if you have sufficient network bandwidth (like 1 Gbps) and the dataset is in the 10–100 TB range, an online transfer using gsutil (with optimizations like compression) is the recommended approach.
upvoted 1 times
...
Blackstile
2 weeks, 2 days ago
https://cloud.google.com/storage-transfer/docs/transfer-options says: "Transferring more than 1 TB from on-premises: use Storage Transfer Service or Transfer Appliance." This question has a trick, because it mentions cost and Transfer Appliance is very expensive; if the option were Storage Transfer Service, then yes, I would choose it over gsutil.
upvoted 1 times
...
cloud_rider
3 weeks, 2 days ago
Selected Answer: D
D is correct: it saves time and is cost-optimized too.
upvoted 1 times
...
PetarMarinkovic
1 month ago
Selected Answer: D
Transfer Appliance will take more time than gcloud storage cp :)
upvoted 2 times
...
david_tay
1 month, 1 week ago
Selected Answer: B
B is the correct answer. I did some thorough questioning in Gemini and this is a good explanation (after referring to official GCP resources too): Time: transferring 10 TB over a 1 Gbps connection will take a very long time (likely more than a day, even under ideal conditions). The gcloud storage cp command, while capable of resumable uploads for individual files, doesn't handle interruptions well for large directory transfers. Estimated 22 hours to complete the transfer under the most ideal network conditions, but still a big risk if a large file is interrupted halfway and needs to start over. Although this 10 TB transfer is likely spread across many files, there could still be some large files among them. Hence, to optimize cost and time, B (Transfer Appliance) is the answer.
upvoted 1 times
...
ryaryarya
2 months, 3 weeks ago
Selected Answer: D
Nowhere in the question does it say the database export is one file (for anyone hung up on the 5 TB object-size limit). And even if it were, the data still needs to end up in GCS; using the appliance doesn't change that. Also, the appliance would be far more expensive, and it would take much longer before your data is available in GCS.
upvoted 2 times
...
rrope
3 months, 1 week ago
Selected Answer: B
The best option is B. Use the Data Transfer appliance to perform an offline migration.
upvoted 1 times
...
VishalMoon
3 months, 3 weeks ago
Selected Answer: B
I think the key factor here is "Google-recommended practices". Based on this alone, we have to select B. If that phrase were not there, then given the 1 Gbps speed, D would have been feasible.
upvoted 1 times
...
Zonci
9 months, 3 weeks ago
Selected Answer: B
B. Use the Transfer Appliance to perform an offline migration. Using the Transfer Appliance aligns with Google's recommended practice for large-scale migrations where bandwidth limitations are a concern, ensuring an efficient, secure, and cost-effective transfer of your on-premises database export into Cloud Storage.
upvoted 3 times
desertlotus1211
4 months, 1 week ago
What about cost? And the time for the appliance to reach the GCP region/zone? And the chain of custody of the data, plus the upload before it's available? What is the cost of all that? How long will it take?
upvoted 1 times
...
...
Wasamela
10 months, 4 weeks ago
The current maximum object size supported by GCS is 5 TB, so it should be B
upvoted 1 times
...
a53fd2c
11 months, 4 weeks ago
Option B. The Cloud Storage object limit is 5 TB. https://cloud.google.com/storage/quotas?hl=en#objects
upvoted 2 times
...
madcloud32
1 year, 1 month ago
Selected Answer: B
Answer is B. The maximum size of a single object is 5 TB.
upvoted 4 times
...
JohnDohertyDoe
1 year, 2 months ago
Selected Answer: D
Since we want to do it in the shortest time possible, using gsutil cp would take only about 30 hours to move 10 TB. So the answer is D. https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets?hl=en#online_versus_offline_transfer
upvoted 6 times
...
Prudvi3266
1 year, 3 months ago
Selected Answer: D
As we have a 1-gigabit network, we can transfer this with a CLI command quicker than with a Transfer Appliance, which takes more than a week. In case of lower bandwidth, we might use the Transfer Appliance instead.
upvoted 4 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other