Exam Professional Cloud Architect topic 1 question 196 discussion

Actual exam question from Google's Professional Cloud Architect
Question #: 196
Topic #: 1

Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity and the overall cost. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?

  • A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.
  • B. Use the Data Transfer appliance to perform an offline migration.
  • C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage.
  • D. Upload the data with gcloud storage cp.
Suggested Answer: D 🗳️
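For context, option D amounts to a single CLI copy over the existing 1 Gbps link. A minimal sketch of what that looks like (the export path and bucket name are hypothetical):

    # Copy the exported database files into Cloud Storage over the network.
    # gcloud storage cp uses resumable uploads for large files, so an interrupted
    # transfer can be retried without starting over.
    gcloud storage cp --recursive /mnt/exports/db-export gs://example-migration-bucket/db-export/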

Comments

cchiaramelli
Highly Voted 1 year, 3 months ago
Selected Answer: D
The Transfer Appliance docs say it is suitable when "it would take more than one week to upload your data over the network." Since 10 TB would take far less than a week at this bandwidth, I would go for D (see the estimate sketched after this thread). https://cloud.google.com/transfer-appliance/docs/4.0/overview#suitability
upvoted 13 times
exam4c3
3 months, 1 week ago
Your link says Transfer Appliance is a good fit for your data transfer needs if "it would take more than one week to upload your data over the network," but this workload takes about a day to complete via the CLI.
upvoted 1 times
...
...
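As a rough check on the "less than a week" claim, a back-of-envelope estimate of the transfer time (assuming the full 1 Gbps is usable, which real transfers will not quite reach):

    # 10 TB over a 1 Gbps link at theoretical line rate:
    # seconds = (10 * 10^12 bytes * 8 bits) / (10^9 bits per second) = 80,000 s
    echo $(( 10 * 8 * 1000 / 3600 )) hours   # prints 22 hours

Google's transfer-time table, cited in several comments below, puts the figure at roughly 30 hours once real-world overhead is included; either way it is far under the one-week threshold for the appliance.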
mgm7
Highly Voted 1 year, 2 months ago
Selected Answer: B
The maximum object size in Cloud Storage is 5 TB.
upvoted 6 times
mstaicu
7 months, 1 week ago
So how would that work with B? The data still needs to end up in GCS. Also, who says the export is one large file?
upvoted 2 times
...
klefevre08
1 year ago
So no answer is possible, according to you?
upvoted 2 times
...
spuyol
1 year ago
You could solve that with the split command (see the sketch after this thread).
upvoted 1 times
...
...
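If the export really were a single file larger than the 5-TB object limit, spuyol's suggestion could look roughly like this (the file name, chunk size, and bucket are all illustrative):

    # Split a hypothetical single 10 TB dump into 2 TB chunks, each safely under
    # the 5 TB per-object limit.
    split --bytes=2T --numeric-suffixes database-export.dump database-export.part-
    # Upload the chunks; each becomes its own Cloud Storage object.
    gcloud storage cp database-export.part-* gs://example-migration-bucket/export/
    # If a single file is needed later, the parts can be concatenated again
    # (e.g. with cat) after download on the destination side.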
ryaryarya
Most Recent 1 month ago
Selected Answer: D
Nowhere in the question does it say the database export is one file (for anyone who is hung up on the 5-TB object size limit). And even if it were, the data still needs to end up in GCS, and using the appliance doesn't change that. The appliance would also be far more expensive, and it would take much longer before your data was available in GCS.
upvoted 1 times
...
rrope
1 month, 2 weeks ago
Selected Answer: B
The best option is B. Use the Data Transfer appliance to perform an offline migration.
upvoted 1 times
...
VishalMoon
2 months ago
Selected Answer: B
I think the key factor here is "Google-recommended practices." Based on that alone we have to select B. If that requirement were not there, then given the 1 Gbps link, D would have been feasible.
upvoted 1 times
...
Zonci
8 months ago
Selected Answer: B
B. Use the Data Transfer Appliance to perform an offline migration. Using the Data Transfer Appliance aligns with Google's recommended practice for large-scale migrations where bandwidth limitations are a concern, ensuring an efficient, secure, and cost-effective transfer of the on-premises database export into Cloud Storage.
upvoted 3 times
desertlotus1211
2 months, 3 weeks ago
What about cost? And the time for the appliance to reach the GCP region or zone? And the chain of custody of the data, plus the upload before it is available? What would that cost, and how long would it take?
upvoted 1 times
...
...
Wasamela
9 months, 1 week ago
The current maximum object size supported by GCS is 5 TB, so it should be B
upvoted 1 times
...
a53fd2c
10 months, 1 week ago
Option B. The Cloud Storage object size limit is 5 TB. https://cloud.google.com/storage/quotas?hl=en#objects
upvoted 2 times
...
madcloud32
11 months, 2 weeks ago
Selected Answer: B
The answer is B. The maximum size of a single object you can copy into Cloud Storage is 5 TB.
upvoted 4 times
...
JohnDohertyDoe
1 year, 1 month ago
Selected Answer: D
Since we want to do this in the shortest time possible, and gsutil cp would take only about 30 hours to move 10 TB, the answer is D. https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets?hl=en#online_versus_offline_transfer
upvoted 5 times
...
Prudvi3266
1 year, 1 month ago
Selected Answer: D
Since we have a 1-gigabit network, we can transfer this with a CLI command faster than with the Transfer Appliance, which can take more than a week. With less bandwidth, the Transfer Appliance might be the better choice.
upvoted 3 times
...
anjanc
1 year, 1 month ago
Selected Answer: B
I'll go for B.
upvoted 1 times
...
Roro_Brother
1 year, 2 months ago
Selected Answer: B
Because there is 10 TB of data, it's B
upvoted 3 times
...
MahAli
1 year, 2 months ago
Selected Answer: D
The appliance is overkill for what is a 30-hour CLI command; with shipping, the appliance could take more than a week. How often do you run jobs that take hours to complete? All the time. I would go with D.
upvoted 1 times
...
Nora9
1 year, 2 months ago
Selected Answer: B
Option B (use the Data Transfer Appliance to perform an offline migration) seems the most appropriate. It addresses the need for a speedy transfer of a large amount of data and is a cost-effective solution recommended by Google for large-scale data migrations. This option circumvents potential network bandwidth limitations and provides a reliable way to transfer large datasets. Why option D is not a good choice: uploading the data with gcloud storage cp uses the gcloud command-line tool to copy files to Cloud Storage. While simple, it might not be the most efficient for a 10-TB migration given the 1 Gbps bandwidth; the process could be slow and may need additional handling for interruptions and resumed uploads.
upvoted 5 times
MikeH20
1 year, 2 months ago
Respectfully, I feel option D is the better choice. According to https://cloud.google.com/transfer-appliance/docs/4.0/overview#suitability, Google recommends the Transfer Appliance when the transfer would take longer than one week. At 1 Gbps and 10 TB of data, the transfer would take about 30 hours (https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#online_versus_offline_transfer). We also need to account for shipping time for the appliance itself, both ways, which would take a couple of weeks. The question asks us to minimize time and cost and to follow Google best practices; in this case, D checks those boxes. The question does not mention a customer concern about network interruptions. If it did, then B could be argued as the more appropriate answer.
upvoted 3 times
...
...
Murtuza
1 year, 4 months ago
This is a duplicate, so review question #146; the choice over there is the offline Transfer Appliance. This is a tricky question.
upvoted 3 times
Gino17m
9 months, 2 weeks ago
Not exactly a duplicate. In #146 the choice is between the Data Transfer Appliance and gsutil -m; here it is between the appliance and gcloud storage cp (both commands are sketched at the end of this thread). But it is still a tricky question.
upvoted 1 times
...
LifeWins
1 year, 2 months ago
Answer has to be D.
upvoted 1 times
...
...
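For reference, the two CLI variants that questions #146 and #196 contrast; both are sketches with a hypothetical source path and bucket:

    # Question #146 phrasing: legacy gsutil, where -m enables parallel (multi-threaded) copies.
    gsutil -m cp -r /mnt/exports/db-export gs://example-migration-bucket/db-export/
    # This question's phrasing: gcloud storage cp, the newer CLI, which parallelizes
    # large transfers by default.
    gcloud storage cp --recursive /mnt/exports/db-export gs://example-migration-bucket/db-export/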
Community vote distribution: A (35%), C (25%), B (20%), Other