Exam AWS Certified Data Analytics - Specialty topic 1 question 152 discussion

A company needs to implement a near-real-time messaging system for hotel inventory. The messages are collected from 1,000 data sources and contain hotel inventory data. The data is then processed and distributed to 20 HTTP endpoint destinations. Message sizes range from 2 KB to 500 KB.
The messages must be delivered to each destination in order. The performance of a single destination HTTP endpoint should not impact the performance of the delivery for other destinations.
Which solution meets these requirements with the LOWEST latency from message ingestion to delivery?

  • A. Create an Amazon Kinesis data stream, and ingest the data for each source into the stream. Create 30 AWS Lambda functions to read these messages and send the messages to each destination endpoint.
  • B. Create an Amazon Kinesis data stream, and ingest the data for each source into the stream. Create a single enhanced fan-out AWS Lambda function to read these messages and send the messages to each destination endpoint. Register the function as an enhanced fan-out consumer.
  • C. Create an Amazon Kinesis Data Firehose delivery stream, and ingest the data for each source into the stream. Configure Kinesis Data Firehose to deliver the data to an Amazon S3 bucket. Invoke an AWS Lambda function with an S3 event notification to read these messages and send the messages to each destination endpoint.
  • D. Create an Amazon Kinesis data stream, and ingest the data for each source into the stream. Create 20 enhanced fan-out AWS Lambda functions to read these messages and send the messages to each destination endpoint. Register the 20 functions as enhanced fan-out consumers.
Suggested Answer: B

Comments

silvaa360
Highly Voted 2 years, 4 months ago
I would pick D over B. 1) The question asks for the LOWEST latency; it doesn't matter if we end up with higher cost, setup effort, or complexity. 2) What would be the benefit of having one Lambda (i.e., one consumer) with EFO? EFO exists exactly so that multiple consumers can read without affecting each other. 3) The parallelization factor will increase throughput, but we would still have one consumer pulling data in parallel from the same shard; it is not as if multiple Lambda executions would be writing to different HTTP endpoints. 4) With one Lambda, we would have to write to the HTTP endpoints sequentially, like http1.write, http2.write, and so on. 5) Finally, I think EFO with one consumer is irrelevant, because you get 2 MB/s per shard even without EFO. I would go for D all the way. If anyone sees this differently, please share your thoughts.
upvoted 12 times
rocky48
2 years, 4 months ago
The requirement is the LOWEST latency from message ingestion to delivery. How would option D be better than B?
upvoted 1 times
rocky48
2 years, 4 months ago
Whether there is 1 fan-out consumer or 20, it'll still be the same latency from message ingestion to delivery, right?
upvoted 1 times
mawsman
Highly Voted 2 years, 1 month ago
Selected Answer: B
Enhanced fan-out is an Amazon Kinesis Data Streams feature that gives each consumer dedicated throughput of up to 2 MB of data per second per shard. When using Lambda as an enhanced fan-out consumer, you can use the event source mapping parallelization factor to have one Lambda pull from one shard with up to 10 concurrent invocations per consumer. A consumer that uses enhanced fan-out doesn't have to contend with other consumers receiving data from the stream. Our maximum message size is 500 KB, so the dedicated 2 MB/s lets us pull at least 4 messages per shard per second, and with up to 10 parallel invocations per shard a single consumer can keep up even at the maximum message size. Thus, one enhanced fan-out Lambda consumer should be enough to route to all 20 destinations concurrently, and 20 Lambda functions would be far too many, hence B.
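To make that concrete, here is a minimal boto3 sketch of wiring up a Lambda as an enhanced fan-out consumer with a raised parallelization factor. The stream ARN, consumer name, and function name are placeholders I made up, not anything from the question:

```python
import boto3

kinesis = boto3.client("kinesis")
lambda_client = boto3.client("lambda")

# Register a dedicated-throughput (enhanced fan-out) consumer on the stream.
# The stream ARN and consumer name are illustrative placeholders.
consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/hotel-inventory",
    ConsumerName="inventory-router",
)["Consumer"]

# In a real setup you would wait for the consumer to reach ACTIVE before this call.
# The event source mapping points at the consumer ARN (not the stream ARN), and
# ParallelizationFactor allows up to 10 concurrent batches per shard, with each
# batch keyed to the same partition key so per-key ordering is preserved.
lambda_client.create_event_source_mapping(
    EventSourceArn=consumer["ConsumerARN"],
    FunctionName="inventory-router",
    StartingPosition="LATEST",
    BatchSize=100,
    ParallelizationFactor=10,
)
```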
upvoted 8 times
rlnd2000
1 year, 6 months ago
What about ordering? With option B it would be very difficult to maintain order in the case of a failure or timeout. I think D is a better option.
upvoted 1 times
TheEnquirer
1 year ago
When using Lambda as an enhanced fan-out consumer, you can use the event source mapping parallelization factor to have one Lambda pull from one shard concurrently with up to 10 parallel invocations per consumer. Each parallelized invocation contains messages with the same partition key (HotelCode) and maintains order. https://aws.amazon.com/vi/blogs/architecture/field-notes-how-to-scale-opentravel-messaging-architecture-with-amazon-kinesis-data-streams/
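For reference, a small producer-side sketch of where that ordering guarantee comes from, assuming a stream named hotel-inventory and a HotelCode field (both placeholder names): records sharing a partition key always land on the same shard, so their relative order is preserved.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_inventory(message: dict) -> None:
    # Records that share a PartitionKey are routed to the same shard, so Kinesis
    # preserves their relative order; "HotelCode" and the stream name are assumed.
    kinesis.put_record(
        StreamName="hotel-inventory",
        Data=json.dumps(message).encode("utf-8"),
        PartitionKey=message["HotelCode"],
    )
```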
upvoted 1 times
tsangckl
Most Recent 1 year, 1 month ago
Selected Answer: D
Bing said answer is D :)
upvoted 2 times
roymunson
1 year, 5 months ago
I've heard that the enhanced fan-out feature only works over HTTP/2.
upvoted 1 times
michalf84
1 year, 5 months ago
I am confused by the Lambda payload limit of 256 KB, which is smaller than the maximum message size in this question.
upvoted 1 times
rlnd2000
1 year, 6 months ago
Selected Answer: D
In my opinion, two requirements make me select D. First: "The performance of a single destination HTTP endpoint should not impact the performance of the delivery for other destinations." This underscores the need for a design that isolates the delivery paths to the different endpoints. If a single Lambda function were responsible for all destinations, any slow response or failure at one endpoint could cause delays or retries that affect the entire function, impacting delivery to the other endpoints. The behavior or performance of one endpoint should not influence the others. Second, message ordering: each function maintains the order for its own endpoint, reducing complexity and the risk of out-of-order delivery that might arise when one function handles multiple endpoints. A rough sketch of that layout follows.
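Here is a rough handler sketch of the option-D layout under stated assumptions (the DESTINATION_URL environment variable and URL are illustrative, not from the question). Each of the 20 functions is deployed with its own destination, so a slow endpoint only delays its own consumer:

```python
import base64
import os
import urllib.request

# Each of the 20 functions is deployed with its own destination URL, so a slow
# or failing endpoint only delays this consumer, not the other 19.
DESTINATION_URL = os.environ["DESTINATION_URL"]  # e.g. https://example.com/inventory

def handler(event, context):
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded in the Lambda event.
        payload = base64.b64decode(record["kinesis"]["data"])
        req = urllib.request.Request(
            DESTINATION_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # Records in a batch arrive in shard order; processing them sequentially
        # preserves that order for this one destination.
        urllib.request.urlopen(req, timeout=10)
```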
upvoted 2 times
wally_1995
1 year, 9 months ago
B does not address the requirement that the performance of a single destination HTTP endpoint should not impact delivery to the other destinations, since a single Lambda would send the HTTP requests one after the other. 20 independent destinations = 20 consumers, so I say D.
upvoted 2 times
wally_1995
1 year, 9 months ago
Also, with one Lambda you get 10 concurrent invocations consuming from the same shard; with 20 Lambda functions you get up to 200 concurrent invocations across the stream, which would process it much faster and reduce latency. The question does not mention cost.
upvoted 1 times
pk349
1 year, 11 months ago
B: I passed the test
upvoted 2 times
rsn
2 years, 1 month ago
Selected Answer: D
The key requirement is "The performance of a single destination HTTP endpoint should not impact the performance of the delivery for other destinations."
upvoted 7 times
Ashas
2 years, 1 month ago
Answer: D. I think it's D because of https://aws.amazon.com/about-aws/whats-new/2018/11/aws-lambda-supports-kinesis-data-streams-enhanced-fan-out-and-http2/ ; enhanced fan-out with only one Lambda doesn't make sense and is a logically inconsistent setup.
upvoted 2 times
Arjun777
2 years, 3 months ago
KDS for near real time? For such small messages? Isn't KDS meant for real time and something like a minimum of 10 GB of streaming data? Considering it's near real time, shouldn't the answer be Firehose?
upvoted 1 times
Chelseajcole
2 years, 3 months ago
Selected Answer: B
The question is: do the 20 HTTP endpoint destinations count as one consumer or as 20 consumers? If an enhanced fan-out Lambda setup supports up to 5 consumers, then treating the 20 HTTPS endpoints as 20 consumers would require at least 4 fan-out Lambda functions, which would rule out B. However, these 20 HTTP endpoints should be considered one consumer: they all go to the same place, and that place has 20 endpoints. So one fan-out consumer should be good enough instead of 20. Link: https://aws.amazon.com/blogs/compute/increasing-real-time-stream-processing-performance-with-amazon-kinesis-data-streams-enhanced-fan-out-and-aws-lambda/
upvoted 4 times
henom
2 years, 4 months ago
Ans: B. Lambda enhanced fan-out consumers can increase per-shard read throughput through event-based consumption, reduce latency with parallelization, and support error handling. Enhanced fan-out increases the read capacity of consumers from a shared 2 MB per second per shard to a dedicated 2 MB per second for each consumer. When using Lambda as an enhanced fan-out consumer, you can use the event source mapping parallelization factor to have one Lambda pull from one shard concurrently with up to 10 parallel invocations per consumer. Each parallelized invocation contains messages with the same partition key (HotelCode) and maintains order; the invocations complete each message before processing the next one. (See the blog's Figure 8: Lambda fan-out consumers with parallel invocations, maintaining order.)
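For contrast with the per-destination layout discussed above, a rough sketch of how the option-B single enhanced fan-out consumer could forward each record to all 20 endpoints in parallel; the endpoint list is purely illustrative and would come from configuration in practice:

```python
import base64
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Illustrative list; in practice the 20 destination URLs would come from
# configuration (for example, an environment variable or Parameter Store).
ENDPOINTS = [f"https://dest-{i}.example.com/inventory" for i in range(20)]

def _post(url: str, payload: bytes) -> None:
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)

def handler(event, context):
    with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
        for record in event["Records"]:
            payload = base64.b64decode(record["kinesis"]["data"])
            # Post the same record to all destinations in parallel, finishing one
            # record before moving to the next so every destination sees the
            # records in shard order.
            list(pool.map(lambda url: _post(url, payload), ENDPOINTS))
```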
upvoted 3 times
rudramadhu
2 years, 8 months ago
Answer B - https://aws.amazon.com/blogs/compute/increasing-real-time-stream-processing-performance-with-amazon-kinesis-data-streams-enhanced-fan-out-and-aws-lambda/ refer "Enhanced fan-out with Lambda functions"
upvoted 4 times
akashm99101001com
2 years, 1 month ago
The blog has a comparison that uses 3 Lambda functions with EFO. Quoting the "Comparing methods" section: "To demonstrate the advantage of Kinesis Data Streams enhanced fan-out, I built an application with a single shard stream. It has three Lambda functions connected using the standard method and three Lambda functions connected using enhanced fan-out for consumers. I created roughly 76 KB of dummy data and inserted it into the stream at 1,000 records per second. After four seconds, I stopped the process, leaving a total of 4,000 records to be processed." That means we would need 20 EFO consumers; the maximum number of EFO consumers is 20, so that satisfies the limit. https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html
upvoted 2 times
arboles
2 years, 8 months ago
I have concerns regarding B: one Lambda function processing data and sending to 20 HTTP endpoints seems complicated, and enhanced fan-out for a single Lambda seems unnecessary. On the other hand, in D, having 20 Lambdas that each send to one endpoint puts unnecessary load on Kinesis. I think a good compromise would be several Lambdas, for example 5, each handling 4 endpoints.
upvoted 3 times
rocky48
2 years, 9 months ago
Selected Answer: B
Answer-B
upvoted 2 times
Alekx42
2 years, 9 months ago
Why would you want to use an enhanced fan-out consumer in answer B? There is only a single consumer, so there is no benefit to the feature. In answer D, using 20 Lambda functions is wasteful, but with enhanced fan-out consumers you can be sure you will not have throttling problems.
upvoted 1 times
Community vote distribution: A (35%), C (25%), B (20%), Other