
Exam Professional Data Engineer topic 1 question 21 discussion

Actual exam question from Google's Professional Data Engineer
Question #: 21
Topic #: 1

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?

  • A. Assign global unique identifiers (GUID) to each data entry.
  • B. Compute the hash value of each data entry, and compare it with all historical data.
  • C. Store each data entry as the primary key in a separate database and apply an index.
  • D. Maintain a database table to store the hash value and other metadata for each data entry.
Suggested Answer: A

Comments

dg63
Highly Voted 4 years, 4 months ago
The best answer is "A". Answer "D" is not as efficient or error-proof, for two reasons. 1. You need to calculate the hash at the sender as well as at the receiver end to do the comparison, which wastes computing power. 2. Even if we discount the computing power, note that the system is sending inventory information. Two messages sent at different times can denote the same inventory level (and thus have the same hash). Adding the sender timestamp to the hash would defeat the purpose of using a hash, since retried messages would then have a different timestamp and a different hash. If the timestamp is the message-creation timestamp, then it can itself serve as a UUID.
upvoted 66 times
emmylou
1 year, 1 month ago
If you add a unique ID, aren't you by definition not getting duplicate records? Honestly, I hate all these answers.
upvoted 3 times
billalltf
6 months, 2 weeks ago
You can add a function or condition that verifies whether the globally unique ID already exists, or just deduplicate later (see the sketch after this thread).
upvoted 1 times
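A minimal BigQuery sketch of the deduplicate-later idea above, assuming the GUID is assigned when the inventory entry is created, so a retransmission carries the same GUID. The dataset, table, and column names (mydataset.inventory_raw, guid, transmitted_at) are hypothetical:

    -- Keep one row per GUID, preferring the earliest transmission;
    -- retransmissions of the same entry share a GUID and are dropped.
    SELECT * EXCEPT(rn)
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (PARTITION BY guid ORDER BY transmitted_at) AS rn
      FROM mydataset.inventory_raw
    )
    WHERE rn = 1;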
retax
4 years, 1 month ago
If the goal is to ensure at least ONE of each pair of entries is inserted into the DB, then how does assigning a GUID to each entry resolve the duplicates? Keep in mind that if the 1st entry fails, then hopefully the 2nd (duplicate) succeeds.
upvoted 13 times
ralf_cc
3 years, 5 months ago
A. In D, the same message with a different timestamp will have a different hash, even though the message content is the same.
upvoted 12 times
MaxNRG
2 years, 10 months ago
agreed, the key here is "payload of several fields and the timestamp"
upvoted 2 times
MaxNRG
2 years, 10 months ago
"payload of several fields and the timestamp of the transmission"
upvoted 2 times
BigDataBB
2 years, 10 months ago
Hi Max, I also think the hash value would be wrong, because the timestamp is part of the payload and it isn't stated that the hash is generated without the timestamp; but it also isn't stated whether the GUID is tied to the transmission. This is often a point where the answer is vague, because it doesn't specify whether the GUID relates to the data or to the send.
upvoted 1 times
omakin
3 years, 4 months ago
Strong answer is A. Another question among the GCP sample questions reads: "You are building a new real-time data warehouse for your company and will use BigQuery streaming inserts. There is no guarantee that data will only be sent in once but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not included while interactively querying data. Which query type should you use?" This means you need a unique ID and a timestamp to properly dedupe data.
upvoted 8 times
Tanzu
2 years, 10 months ago
You need a unique ID, but in this scenario there is none, so you have to calculate one by hashing some of the fields in the dataset. As for A, assigning a GUID on the processing side will not solve the issue, because you will assign different IDs...
upvoted 1 times
cetanx
1 year, 10 months ago
Answer: D. The key statement is "Transmitted data includes a payload of several fields and the timestamp of the transmission." So the timestamp is appended to the message while sending; in other words, that field changes if the message is retransmitted. Adding a GUID doesn't help much, because if a message is transmitted twice you will have a different GUID for each copy even though they are the same, duplicate data. You can simply calculate a hash based on a subset of the columns (the several payload fields, definitely excluding the timestamp). By doing so, you can assure a distinct hash for each distinct message (see the sketch after this thread).
upvoted 5 times
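A hedged sketch of the hashing cetanx describes: fingerprint only the payload fields and exclude the transmission timestamp, so a retransmitted message produces the same hash. The payload field names (item_id, quantity, warehouse) are invented for illustration:

    -- Hash the payload fields only; the transmission timestamp is excluded,
    -- so a retransmission yields an identical fingerprint.
    SELECT *,
           FARM_FINGERPRINT(TO_JSON_STRING(
             STRUCT(item_id, quantity, warehouse))) AS payload_hash
    FROM mydataset.inventory_raw;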
MarcoDipa
2 years, 11 months ago
Answer is D. Using hash values we can remove duplicate values from a database: hash values will be the same for duplicate data, so duplicates can easily be rejected. Obviously you won't include the timestamp in the hash. D is better than B because maintaining a separate table avoids the cost of computing hashes over all historical data.
upvoted 5 times
Mathew106
1 year, 4 months ago
Why can't it be A, where the GUID is a hash value? Why do we need to store the hash with the metadata in a separate database to do the deduplication?
upvoted 1 times
[Removed]
Highly Voted 4 years, 8 months ago
Answer: D. Description: Using hash values we can remove duplicate values from a database. Hash values will be the same for duplicate data and thus can easily be rejected.
upvoted 24 times
stefanop
2 years, 7 months ago
Hash values for the same data will be the same, but in this case the data also contains the timestamp.
upvoted 2 times
DGames
1 year, 11 months ago
While calculating the hash value, we exclude the timestamp.
upvoted 1 times
vbrege
Most Recent 5 months, 1 week ago
1. My original vote was B. I chose it over D because option D does not explicitly say anything about how that table would be used for deduplication. In hindsight, the explicit usage of the table should not be given much weight, so after review and seeing other comments, I came around to D as the correct answer.
2. Looking more closely at option D (and B too), it is a little ambiguous which fields would be used to create the hash. If you use the payload PLUS the timestamp, the hash is of no use. This is a little confusing.
3. Finally, although I never initially considered it, A seems to be the correct option. The GUID is created at data entry, NOT at the transmission stage. So the GUID should represent the payload only, NOT the timestamp, which makes it unique per payload rather than per transmission of the same payload. In the end, I feel A is the correct choice.
upvoted 1 times
TVH_Data_Engineer
11 months, 1 week ago
Selected Answer: D
To deduplicate the data most efficiently, especially in a cloud environment where the data is sent periodically and re-transmissions can occur, the recommended approach would be: D. Maintain a database table to store the hash value and other metadata for each data entry. This approach allows you to quickly check if an incoming data entry is a duplicate by comparing hash values, which is much faster than comparing all fields of a data entry. The metadata, which includes the timestamp and possibly other relevant information, can help resolve any ambiguities that may arise if the hash function ever produces collisions.
upvoted 1 times
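If option D were implemented, the lookup table TVH_Data_Engineer describes might look something like this BigQuery MERGE sketch; all table and column names here are assumptions, not anything given in the question:

    -- Record only payload hashes that have not been seen before.
    MERGE mydataset.dedup_log AS t
    USING (
      -- Collapse within-batch duplicates before merging.
      SELECT payload_hash, MIN(transmitted_at) AS first_seen_at
      FROM (
        SELECT FARM_FINGERPRINT(TO_JSON_STRING(
                 STRUCT(item_id, quantity, warehouse))) AS payload_hash,
               transmitted_at
        FROM mydataset.incoming_batch
      )
      GROUP BY payload_hash
    ) AS s
    ON t.payload_hash = s.payload_hash
    WHEN NOT MATCHED THEN
      INSERT (payload_hash, first_seen_at) VALUES (s.payload_hash, s.first_seen_at);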
JustQ
1 year ago
B. Compute the hash value of each data entry, and compare it with all historical data.
Explanation:
Efficiency: hashing is a fast operation, and comparing hash values is generally quicker than comparing the entire payload or using other methods.
Space efficiency: storing hash values requires less storage space than storing entire payloads or using globally unique identifiers (GUIDs).
Deduplication: by computing the hash value of each data entry and comparing it with historical data, you can easily identify duplicate transmissions. If the hash value matches an existing one, the payload is the same.
upvoted 3 times
steghe
1 year ago
I thought the answer was A because it's more efficient. But I read the answer more carefully: the GUID is assigned "at each data entry". It is not said that the GUID is assigned by the publisher. If the GUID is assigned at data entry (on the subscriber side), two equal messages can have different GUIDs. D is not complete either, because it is not precise about which fields feed the hash. I'm in doubt on this one :-(
upvoted 2 times
Lestrang
8 months, 1 week ago
Data entry means a record; it is not an action. That means each record will have a unique ID. So, assuming our sink will not accept duplicates based on a key, the GUID will work.
upvoted 1 times
rocky48
1 year ago
Selected Answer: A
Answer: A, for the reasons in dg63's highly voted comment above.
upvoted 1 times
rtcpost
1 year, 1 month ago
Selected Answer: D
D. Maintain a database table to store the hash value and other metadata for each data entry. Storing a database table with hash values and metadata is an efficient way to deduplicate data. When new data is transmitted, you can calculate the hash of the payload and check whether it already exists in the database. This approach allows for efficient duplicate detection without the need to compare the new data with all historical data. It's a common and scalable technique used to ensure data consistency and avoid processing the same data multiple times. Options A (assigning GUIDs to each data entry) and C (storing each data entry as the primary key) can work, but they might be less efficient than using hash values when dealing with a large volume of data. Option B (computing the hash value of each data entry and comparing it with all historical data) can be computationally expensive and slow, especially if there's a significant amount of historical data to compare against. Storing hash values in a table allows for fast and efficient deduplication.
upvoted 1 times
alihabib
1 year, 3 months ago
Why not D? Generate a hash for each payload entry and maintain the value as metadata; do the validation check in Dataflow. A GUID will generate two different entries for the same payload entry, so it does not tackle the duplication check.
upvoted 2 times
Hungry_guy
1 year, 3 months ago
Answer is B. Although the timestamp differs for each transmission, the hash value is computed over the payload, not the timestamp, which is just an added field for transmission. So the hash value remains the same for all transmissions of the same data, which is what we can use for comparison. It is much more efficient to directly compare the hash values with the historical data to check for and remove duplicates, instead of also spending storage on a separate table as in option D.
upvoted 3 times
Mark_86
1 year, 4 months ago
Selected Answer: D
This question is formulated very badly. From the way A is formulated, you would not deduplicate; rather, the duplicates would have the same GUID. Then we have D, which stores the information (assuming the hash is created without the timestamp). B does the comparison right away; D only alludes to the actual deduplication, but it would be more efficient.
upvoted 2 times
boca_2022
1 year, 7 months ago
Selected Answer: A
A is best choice. D doesn't make sense.
upvoted 2 times
FP77
1 year, 3 months ago
A is incorrect. How can you find duplicates if you assign a unique ID to every record? The answer is either B or D. I first selected B, but reading through the answers, D may be better.
upvoted 2 times
Melampos
1 year, 7 months ago
Selected Answer: D
You cannot deduplicate data by adding a random GUID; with a GUID, each row is distinct from the others.
upvoted 1 times
juliobs
1 year, 8 months ago
Hard question. It's a *proprietary* system. Who guarantees we can even add a GUID? But if you can, it's definitely more efficient than calculating hashes (ignoring timestamp).
upvoted 4 times
tibuenoc
1 year, 9 months ago
Selected Answer: A
As Dg63 wrote.
upvoted 2 times
AshokPalle
1 year, 9 months ago
Just asked ChatGPT; it gave me option D.
upvoted 1 times
musumusu
1 year, 9 months ago
Answer B. Option A: GUIDs can deduplicate the data but are expensive, and suit pipelines with multiple processing stages. Option B: use a hash function to identify unique rows; such a function can be applied directly in BigQuery. Option D is more complex and more expensive. For example:

    CREATE TEMP FUNCTION hashValue(input STRING) AS (
      -- FARM_FINGERPRINT returns an INT64 fingerprint of its input
      CAST(FARM_FINGERPRINT(input) AS STRING)
    );
upvoted 1 times
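Run as one script together with musumusu's temp function above, a hypothetical usage might look like this (payload literals invented):

    WITH raw AS (
      SELECT 'itemA,42' AS payload UNION ALL
      SELECT 'itemA,42'  -- retransmitted duplicate
    )
    SELECT DISTINCT hashValue(payload) AS payload_hash
    FROM raw;  -- returns a single row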
Community vote distribution: A (35%), C (25%), B (20%), Other.