Exam AWS Certified Solutions Architect - Associate SAA-C03 topic 1 question 33 discussion

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?

  • A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
  • B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
  • C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
  • D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
Suggested Answer: C 🗳️
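
For context, here is a minimal sketch of what option C's processing Lambda could look like. This is only an illustration, not part of the question: the table name "transactions" and the field names in SENSITIVE_FIELDS are hypothetical. The function reads a batch of records from the Kinesis data stream, strips the sensitive attributes, and writes the result to DynamoDB.

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transactions")  # hypothetical table name

# Hypothetical names of the attributes that count as "sensitive data".
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}


def handler(event, context):
    """Consume a batch of Kinesis records, sanitize them, store them in DynamoDB."""
    for record in event["Records"]:
        # Kinesis event payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Strip sensitive attributes before persisting.
        cleaned = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}

        # Low-latency retrieval store (the "document database" in the question).
        table.put_item(Item=cleaned)
```

The other internal applications would attach as additional consumers of the same stream (standard consumers or enhanced fan-out), independently of this function.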

Comments

ArielSchivo
Highly Voted 2 years, 4 months ago
Selected Answer: C
I would go for C. The tricky phrase is "near-real-time solution", which points to Firehose, but Firehose can't send data to DynamoDB, so that leaves C as the best option. Kinesis Data Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Datadog, New Relic, Dynatrace, Sumo Logic, LogicMonitor, MongoDB, and HTTP endpoints as destinations. https://aws.amazon.com/kinesis/data-firehose/faqs/#:~:text=Kinesis%20Data%20Firehose%20currently%20supports,HTTP%20End%20Point%20as%20destinations.
upvoted 96 times
Lonojack
2 years, 1 month ago
This was a really tough one, but you have the best explanation on here, with a reference to back it up. Thanks. I'm going with answer C!
upvoted 4 times
...
SaraSundaram
1 year, 11 months ago
There are many questions involving Firehose and Data Streams. You need to know them in detail to answer. Thanks for the explanation.
upvoted 4 times
diabloexodia
1 year, 7 months ago
Data Streams is used if you want real-time results, but with Firehose you generally use the data at a later point in time by storing it somewhere. So if you see "real time", the answer is most probably Kinesis Data Streams.
upvoted 19 times
...
...
lizzard812
2 years, 1 month ago
Sorry, but I still can't see how Kinesis Data Streams is 'scalable', since you have to provision the number of shards in advance.
upvoted 1 times
habibi03336
2 years ago
"easily stream data at any scale" This is a description of Kinesis Data Stream. I think you can configure its quantity but still not provision and manage scalability by yourself.
upvoted 1 times
...
...
...
JesseeS
Highly Voted 2 years, 4 months ago
The answer is C, because Firehose does not support DynamoDB, and another key word is "data": Kinesis Data Streams is the correct choice. Pay attention to key words. AWS likes to trick you to make sure you know the services.
upvoted 33 times
...
kyd0nix
Most Recent 1 month ago
Selected Answer: B
IMO "near-real-time" is key for Firehose, BUT since of all the discussions B vs C (Firehose can't have DynamoDB as destination, I think the question is misswritten and has to be reviewed to avoid the confusion)
upvoted 1 times
...
FlyingHawk
1 month, 2 weeks ago
Selected Answer: C
C - Kinesis Data Streams allows for low-latency processing, which is crucial for near-real-time requirements.
upvoted 1 times
...
MGKYAING
2 months, 1 week ago
Selected Answer: C
Scalable processing: the system must scale to handle hundreds of thousands of users and millions of transactions during peak hours.
Near-real-time sharing: transactions should be shared with internal applications in near-real time.
Sensitive data removal: sensitive information must be processed and removed before storage.
Low-latency retrieval: the processed data must be stored in a document database (Amazon DynamoDB) for quick access.
upvoted 1 times
...
aefuen1
2 months, 3 weeks ago
Selected Answer: B
It's B. You can write to the DynamoDB table from the Lambda preprocessing function (see the Firehose-transform sketch after this thread). Also, option C can't be correct, because if "Other applications can consume the transactions data off the Kinesis data stream", they will consume data with the sensitive values still present, which violates a constraint of the solution.
upvoted 2 times
kernel1
3 weeks, 1 day ago
The question says the data needs to have sensitive information removed before DB storage. Other apps can consume the transaction data off the stream but don't necessarily store it, or they consume only the non-sensitive data.
upvoted 1 times
...
...
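To make the B vs. C comparison above concrete, a Firehose data-transformation Lambda follows a different contract: it must return every record with a recordId, a result, and base64-encoded data. A minimal sketch, with the sensitive field names again hypothetical:

```python
import base64
import json

# Hypothetical names of the attributes that count as "sensitive data".
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}


def handler(event, context):
    """Firehose data-transformation Lambda: sanitize records in flight."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        cleaned = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            # Firehose expects the transformed payload re-encoded in base64.
            "data": base64.b64encode(json.dumps(cleaned).encode()).decode(),
        })
    return {"records": output}
```

Note that this only changes what Firehose delivers to its destinations (e.g. S3); DynamoDB is not a Firehose destination, so option B would still need an explicit DynamoDB write inside the function, which is essentially what the thread above is debating.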
engnrshon
3 months, 2 weeks ago
C :
upvoted 1 times
...
Mauro0001
6 months, 1 week ago
Selected Answer: C
One of the tricky phrases is 'near-real-time solution'. Every write to a database incurs a delay, and retrieving the data afterwards with an API call adds more latency. With Kinesis Data Streams that path is optimized: the same intermediary that feeds the write to DynamoDB also serves the data to other consumers, thanks to the stream's retention period.
upvoted 2 times
...
PaulGa
6 months, 1 week ago
Selected Answer: C
Ans C. High-level difference between Kinesis and DynamoDB Streams: Kinesis Data Streams allows production/consumption of large volumes of data (web data, logs, etc.); DynamoDB Streams is a feature local to DynamoDB that tracks granular changes to DynamoDB table items. (Note also: data latency for Firehose is 60 seconds or higher; Data Streams is for custom processing and has sub-second latency.)
upvoted 2 times
...
Lin878
8 months ago
Selected Answer: C
Q: What is a destination in Firehose? A destination is the data store where your data will be delivered. Firehose currently supports Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Datadog, NewRelic, Dynatrace, Sumo Logic, LogicMonitor, MongoDB, and HTTP End Point as destinations. https://aws.amazon.com/firehose/faqs/
upvoted 2 times
...
the_mellie
9 months, 2 weeks ago
Selected Answer: C
With multiple consumers and on-the-fly modification, it seems like the most logical choice.
upvoted 2 times
...
vi24
1 year ago
I chose B. "Near real time" is very specific to Kinesis Data Firehose, which is a better option anyway. The rest of the answer makes sense too. C is wrong: it says "sensitive data removed by Lambda & then store transaction data in DynamoDB", yet it goes on to say other applications access the transaction data from the Kinesis data stream!
upvoted 3 times
...
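Regarding the point above about consumers seeing sensitive data: in option C the stream carries the raw producer payload, and the sanitizing happens only on the path into DynamoDB. A minimal polling sketch of one of the "other internal applications" (stream name hypothetical; real consumers would more likely use an event source mapping, the KCL, or enhanced fan-out):

```python
import json

import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "transactions-stream"  # hypothetical stream name

# Poll every shard of the shared stream from the beginning of its retention window.
for shard in kinesis.list_shards(StreamName=STREAM_NAME)["Shards"]:
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM_NAME,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]
    for record in kinesis.get_records(ShardIterator=iterator, Limit=100)["Records"]:
        # Consumers read the record exactly as the producer wrote it,
        # i.e. before the sanitizing Lambda runs, sensitive fields included.
        transaction = json.loads(record["Data"])
        print(transaction)
```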
Pics00094
1 year ago
Selected Answer: C
Need to know: 1) Lambda integration; 2) the difference between real time (Kinesis Data Streams) and near real time (Kinesis Data Firehose); 3) Firehose can't target DynamoDB.
upvoted 5 times
...
JulianWaksmann
1 year, 1 month ago
I think C is bad too, because it isn't near real time.
upvoted 2 times
...
awsgeek75
1 year, 1 month ago
Selected Answer: C
A: DynamoDB Streams are change logs, not fit for real-time sharing. B: S3 is not a document database; it's blob storage. D: S3 files are not a database. C: Kinesis + Lambda + DynamoDB is a high-performance, low-latency, scalable solution.
upvoted 3 times
...
A_jaa
1 year, 1 month ago
Selected Answer: C
Answer-C
upvoted 1 times
...
bujuman
1 year, 2 months ago
Selected Answer: C
Kinesis Data Streams can handle near-real-time delivery, and the Lambda integration can store the processed data in DynamoDB.
upvoted 1 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other