Welcome to ExamTopics


Exam Professional Cloud Architect topic 1 question 160 discussion

Actual exam question from Google's Professional Cloud Architect
Question #: 160
Topic #: 1

The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?

  • A. Append metadata to file body. Compress individual files. Name files with serverName – Timestamp. Create a new bucket if the bucket is older than 1 hour and save individual files to the new bucket; otherwise, save files to the existing bucket.
  • B. Batch every 10,000 events with a single manifest file for metadata. Compress event files and the manifest file into a single archive file. Name files using serverName – EventSequence. Create a new bucket if the bucket is older than 1 day and save the single archive file to the new bucket; otherwise, save the single archive file to the existing bucket.
  • C. Compress individual files. Name files with serverName – EventSequence. Save files to one bucket. Set custom metadata headers for each object after saving.
  • D. Append metadata to file body. Compress individual files. Name files with a random prefix pattern. Save files to one bucket.
Suggested Answer: D 🗳️

Comments

Chosen Answer:
rishab86
Highly Voted 3 years, 5 months ago
answer is definitely D https://cloud.google.com/storage/docs/request-rate#naming-convention
"A longer randomized prefix provides more effective auto-scaling when ramping to very high read and write rates. For example, a 1-character prefix using a random hex value provides effective auto-scaling from the initial 5000/1000 reads/writes per second up to roughly 80000/16000 reads/writes per second, because the prefix has 16 potential values. If your use case does not need higher rates than this, a 1-character randomized prefix is just as effective at ramping up request rates as a 2-character or longer randomized prefix."
Example:
my-bucket/2fa764-2016-05-10-12-00-00/file1
my-bucket/5ca42c-2016-05-10-12-00-00/file2
my-bucket/6e9b84-2016-05-10-12-00-01/file3
upvoted 36 times
...
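[Editor's note] The randomized-prefix naming scheme quoted above can be sketched in a few lines of Python. This is an illustrative helper, not code from the docs; the 6-hex-char prefix and timestamp layout simply mirror the `my-bucket/2fa764-2016-05-10-12-00-00/file1` example.

```python
import secrets
from datetime import datetime, timezone

def object_name(filename: str) -> str:
    """Build a Cloud Storage object name with a short random hex prefix,
    so concurrent writes spread across the key space and request rates
    can ramp up more effectively (per the request-rate docs quoted above)."""
    prefix = secrets.token_hex(3)  # 6 hex chars, e.g. '2fa764'
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H-%M-%S")
    return f"{prefix}-{ts}/{filename}"
```

Calling `object_name("file1")` yields names like `5ca42c-2025-01-10-12-00-00/file1`, matching the doc example's shape.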
kopper2019
Highly Voted 3 years, 5 months ago
- New Q, 06/2021 Helicopter Racing League Testlet 1, QUESTION 6
For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances are allocated to do video encoding and transcoding. You suspect that these Virtual Machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?
A. Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.
B. Use the gcloud compute instances list to list the virtual machine instances that have the idle: true label set.
C. Use the gcloud recommender command to list the idle virtual machine instances.
D. From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.
upvoted 6 times
kravenn
3 years, 3 months ago
answer C
upvoted 2 times
...
juccjucc
3 years, 4 months ago
is it C?
upvoted 1 times
cloudstd
3 years, 4 months ago
This is not 100% accurate. You should investigate if you doubt it is incorrect: https://cloud.google.com/compute/docs/instances/viewing-and-applying-idle-vm-recommendations
upvoted 4 times
Papafel
3 years, 4 months ago
The correct answer is A
upvoted 1 times
matmuh
3 years ago
Absolutely C
upvoted 2 times
...
...
squishy_fishy
11 months, 2 weeks ago
The correct answer is C based on the URL you shared.
gcloud recommender recommendations list \
  --project=PROJECT_ID \
  --location=ZONE \
  --recommender=google.compute.instance.IdleResourceRecommender \
  --format=yaml
upvoted 1 times
...
...
...
KS1911
3 years, 4 months ago
I have my exam scheduled after 3 days. Would there be more questions coming on ExamTopics?
upvoted 3 times
...
cloudstd
3 years, 4 months ago
answer: C
upvoted 8 times
...
...
Sephethus
Most Recent 5 months, 1 week ago
This question is messed up. The formatting, the discussion, everything. I have no idea what to choose here. Chat GPT thinks the answer is C but most think it is D and there's not much difference between the two answers.
upvoted 1 times
...
squishy_fishy
11 months, 2 weeks ago
The question is how to reduce data loss; the answer should be something like separation of duties or data loss prevention, but answer D is about reducing latency when retrieving data. I'm baffled by this question.
upvoted 2 times
...
marcohol
1 year, 1 month ago
I agree with D, but then, wouldn't using a random prefix make file retrieval more difficult?
upvoted 1 times
...
ptsironis
1 year, 6 months ago
Selected Answer: B
Why not option B??
upvoted 3 times
...
nunopires2001
1 year, 10 months ago
I was thinking the correct answer was A, because we should have some kind of bucket rotation in order to avoid hitting the max size of a bucket. However, it seems there is no size limit for a GCP Cloud Storage bucket, so I will have to agree with the community and stick to answer D.
upvoted 1 times
...
Mahmoud_E
2 years, 1 month ago
Selected Answer: D
D is the correct answer https://cloud.google.com/storage/docs/request-rate#naming-convention
upvoted 2 times
...
Pime13
2 years, 10 months ago
D: https://cloud.google.com/storage/docs/request-rate#naming-convention
upvoted 2 times
...
vincy2202
2 years, 11 months ago
Selected Answer: D
D is the correct answer
upvoted 2 times
...
joe2211
3 years ago
Selected Answer: D
vote D
upvoted 5 times
...
amxexam
3 years, 2 months ago
Request admin to intervene and delete the hijacking of the question by kopper2019
upvoted 4 times
Examster1
3 years, 2 months ago
Use the material for study dude! Hello? Anyone home?
upvoted 5 times
...
Arad
3 years ago
it looks like this website does not have any admin
upvoted 1 times
...
...
kopper2019
3 years, 5 months ago
- New Q, 06/2021 Helicopter Racing League Testlet 1, QUESTION 5
For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?
A. Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.
B. Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.
C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.
D. Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.
upvoted 3 times
cloudstd
3 years, 4 months ago
answer: C
upvoted 2 times
Papafel
3 years, 4 months ago
Yes answer is C
upvoted 2 times
...
...
juccjucc
3 years, 4 months ago
Is it C? Are all these questions from the new exam? Why are they here in the comments and not as questions in the list?
upvoted 2 times
kopper2019
3 years, 4 months ago
Because the exam was not updated, I added the Qs here; they have since been added as normal questions, so now we have 218 Qs.
upvoted 4 times
Roncy
3 years, 1 month ago
Hey Kopper, when would you provide the new set of questions ?
upvoted 1 times
...
...
...
kravenn
3 years, 3 months ago
answer: C
upvoted 1 times
...
...
kopper2019
3 years, 5 months ago
- New Q, 06/2021 Helicopter Racing League Testlet 1, QUESTION 4
For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret the predictions. What should you do?
A. Use Explainable AI.
B. Use Vision AI.
C. Use Google Cloud’s operations suite.
D. Use Jupyter Notebooks.
upvoted 3 times
Sephethus
5 months, 1 week ago
what does this have to do with the cloud storage question?
upvoted 1 times
...
cloudstd
3 years, 4 months ago
answer: A
upvoted 4 times
...
juccjucc
3 years, 4 months ago
is it A?
upvoted 2 times
Papafel
3 years, 4 months ago
Yes answer is A
upvoted 1 times
...
...
kravenn
3 years, 3 months ago
answer A
upvoted 1 times
...
...
kopper2019
3 years, 5 months ago
- New Q, 06/2021 Helicopter Racing League Testlet 1, QUESTION 3
For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?
A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
B. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
D. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.
upvoted 2 times
Amrit123
3 years, 1 month ago
C is the right answer. A scheduler would run even when the release has not been done; if you read "as soon as it is released", the release time is not certain. So the answer is C. Check out the last 30 questions; there is a separate discussion that gives a better idea.
upvoted 2 times
...
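[Editor's note] The flow in option C — the deployment job publishes to a Pub/Sub topic, which triggers a Cloud Function — can be sketched with a minimal handler. This is a hypothetical illustration (the function name, payload, and 1st-gen Python signature are assumptions, not part of the exam question):

```python
import base64

def airwolf_trigger(event, context):
    """Hypothetical entry point for a Pub/Sub-triggered Cloud Function
    (1st-gen Python signature: event dict with base64 'data', plus context).
    The deployment job publishes a release notification; this function
    would then launch the Airwolf penetration test against the release."""
    payload = base64.b64decode(event["data"]).decode("utf-8")
    print(f"release notification: {payload}")
    # ...kick off the penetration test against the new release here...
    return payload
```

Because the function fires on the publish event itself, the test runs as soon as the release lands, with no fixed schedule needed.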
esc
3 years, 4 months ago
answer : A
upvoted 5 times
vchrist
2 years, 12 months ago
Why A? Does Cloud Storage make sense?
upvoted 1 times
...
jask
3 years, 2 months ago
in option A what is the use of Cloud storage bucket? In my opinion answer is C.
upvoted 3 times
...
Papafel
3 years, 4 months ago
Answer is A
upvoted 2 times
...
...
cloudmon
2 years, 7 months ago
I would go with C https://cloud.google.com/source-repositories/docs/code-change-notification
upvoted 1 times
BiddlyBdoyng
1 year, 5 months ago
It's probably C due to pub sub on Cloud Deploy rather than source repos https://cloud.google.com/deploy/docs/subscribe-deploy-notifications
upvoted 1 times
...
...
...
kopper2019
3 years, 5 months ago
- New Q, 06/2021 Helicopter Racing League Testlet 1 Company overview QUESTION 2 For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?
upvoted 1 times
kopper2019
3 years, 5 months ago
A. gcloud compute security-policies rules update 1000 \
     --security-policy from-fastly \
     --src-ip-ranges * \
     --action "allow"
B. gcloud compute firewall rules update sourceiplist-fastly \
     --priority 100 \
     --allow tcp:443
C. gcloud compute firewall rules update hir-policy \
     --priority 100 \
     --target-tags=sourceiplist-fastly \
     --allow tcp:443
D. gcloud compute security-policies rules update 1000 \
     --security-policy hir-policy \
     --expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \
     --action "allow"
upvoted 1 times
cloudstd
3 years, 4 months ago
answer: D
upvoted 6 times
Papafel
3 years, 4 months ago
Answer is A
upvoted 2 times
matmuh
3 years ago
A is incorrect : To match all IPs specify * https://cloud.google.com/sdk/gcloud/reference/compute/security-policies/rules/update
upvoted 1 times
...
...
...
kravenn
3 years, 3 months ago
answer D
upvoted 4 times
...
xavi1
3 years, 3 months ago
both A and D have correct syntax, but src-ip-ranges cannot be "*", correct is D
upvoted 5 times
cloudmon
2 years, 7 months ago
I agree
upvoted 1 times
...
...
...
...
kopper2019
3 years, 5 months ago
- New Q, 06/2021 Helicopter Racing League Testlet 1 Company overview Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race. Solution concept HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging regions, they want to move the serving of their content, both real-time and recorded, closer to their users.
upvoted 1 times
kopper2019
3 years, 5 months ago
Existing technical environment HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing public cloud provider. Their existing technical environment is as follows: - Existing content is stored in an object storage service on their existing public cloud provider. Video encoding and transcoding is performed on VMs created for each job. Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.
upvoted 1 times
kopper2019
3 years, 5 months ago
Business requirements
HRL’s owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
- Support ability to expose the predictive models to partners.
- Increase predictive capabilities during and before races: race results, mechanical failures, crowd sentiment.
- Increase telemetry and create additional insights.
- Measure fan engagement with new predictions.
- Enhance global availability and quality of the broadcasts.
- Increase the number of concurrent viewers.
- Minimize operational complexity.
- Ensure compliance with regulations.
- Create a merchandising revenue stream.
Technical requirements
- Maintain or increase prediction throughput and accuracy.
- Reduce viewer latency.
- Increase transcoding performance.
- Create real-time analytics of viewer consumption patterns and engagement.
- Create a data mart to enable processing of large volumes of race data.
upvoted 1 times
kopper2019
3 years, 5 months ago
Executive statement Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g., overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real- time predictions during races and the capacity to process season-long results.
upvoted 1 times
kopper2019
3 years, 5 months ago
QUESTION 1
For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:
• It must provide low latency at minimal cost.
• It must be able to identify duplicate credit cards and must not store plaintext card numbers.
• It should support annual key rotation.
Which storage approach should you adopt for your tokenization service?
A. Store the card data in Secret Manager after running a query to identify duplicates.
B. Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.
C. Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.
D. Use column-level encryption to store the data in Cloud SQL.
upvoted 2 times
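[Editor's note] The key property behind options B and C is determinism: the same card number always maps to the same token, so duplicates are detectable without storing plaintext. A keyed hash illustrates that property with the standard library only — note this is a one-way illustration, not the reversible deterministic encryption (e.g., deterministic AEAD) a real card vault would use, and the key name is hypothetical:

```python
import hashlib
import hmac

def tokenize(card_number: str, key: bytes) -> str:
    """Deterministic token: the same card number under the same key
    always yields the same token, so duplicate cards can be identified
    without ever storing the plaintext number."""
    return hmac.new(key, card_number.encode("utf-8"), hashlib.sha256).hexdigest()
```

Annual key rotation would change the key and thus all tokens, which is why a real design pairs deterministic encryption with a managed, rotatable key.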
SPNBLUE
3 years, 3 months ago
Why D ?
upvoted 1 times
...
...
...
...
...
...
Community vote distribution
A (35%), C (25%), B (20%), Other