Exam AWS Certified Solutions Architect - Professional topic 1 question 893 discussion

A flood monitoring agency has deployed more than 10,000 water-level monitoring sensors. Sensors send continuous data updates, and each update is less than 1 MB in size. The agency has a fleet of on-premises application servers. These servers receive updates from the sensors, convert the raw data into a human-readable format, and write the results to an on-premises relational database server. Data analysts then use simple SQL queries to monitor the data.
The agency wants to increase overall application availability and reduce the effort that is required to perform maintenance tasks. These maintenance tasks, which include updates and patches to the application servers, cause downtime. While an application server is down, data is lost from sensors because the remaining servers cannot handle the entire workload.
The agency wants a solution that optimizes operational overhead and costs. A solutions architect recommends the use of AWS IoT Core to collect the sensor data.
What else should the solutions architect recommend to meet these requirements?

  • A. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to .csv format, and insert it into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
  • B. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to Apache Parquet format, and save it to an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
  • C. Send the sensor data to an Amazon Kinesis Data Analytics application to convert the data to .csv format and store it in an Amazon S3 bucket. Import the data into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
  • D. Send the sensor data to an Amazon Kinesis Data Analytics application to convert the data to Apache Parquet format and store it in an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
Suggested Answer: B
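
To make the suggested flow concrete, here is a minimal sketch of the Lambda step in option B: it takes a batch of JSON sensor updates, converts them to Apache Parquet, and writes one object to S3 for Athena to query. The event shape, bucket name, and field names below are assumptions for illustration, not details given in the question.

```python
import io
import json

import boto3
import pyarrow as pa
import pyarrow.parquet as pq

s3 = boto3.client("s3")
BUCKET = "flood-sensor-data"  # hypothetical bucket name


def handler(event, context):
    # Assumed event shape: a batch of JSON-encoded sensor updates, e.g.
    # {"updates": ['{"sensor_id": "s-0042", "level_mm": 1830, "ts": "2024-05-01T12:00:00Z"}', ...]}
    rows = [json.loads(u) for u in event["updates"]]

    # Convert the batch to a columnar Parquet table in memory.
    table = pa.Table.from_pylist(rows)
    buf = io.BytesIO()
    pq.write_table(table, buf)

    # One Parquet object per invocation; a date-based prefix lets Athena
    # prune partitions instead of scanning the whole bucket.
    key = f"sensors/dt={rows[0]['ts'][:10]}/{context.aws_request_id}.parquet"
    s3.put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue())
    return {"written": key, "rows": len(rows)}
```

Parquet plus partitioned S3 prefixes is what keeps the Athena side cheap: queries read only the columns and date partitions they touch.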

Comments

pinhead900
Highly Voted 2 years, 2 months ago
Selected Answer: B
"The agency wants to increase overall application availability and reduce the effort that is required to perform maintenance tasks" -> B
upvoted 7 times
spencer_sharp
Most Recent 8 months, 1 week ago
Why is option D wrong?
upvoted 1 times
dcdcdc3
2 years, 2 months ago
Selected Answer: B
The closest reference I could find. It uses Glue rather than Lambda, but still: https://aws.amazon.com/blogs/big-data/analyzing-apache-parquet-optimized-data-using-amazon-kinesis-data-firehose-amazon-athena-and-amazon-redshift/
upvoted 2 times
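
For reference, the Glue-based variant in that blog post skips the Lambda entirely: Firehose can convert incoming JSON to Parquet itself, using a Glue table for the schema. A hedged boto3 sketch of that setup, with all ARNs and names hypothetical:

```python
import boto3

firehose = boto3.client("firehose")

# Assumes the Glue database/table and the IAM role already exist;
# format conversion requires a buffer size of at least 64 MB.
firehose.create_delivery_stream(
    DeliveryStreamName="sensor-parquet",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::flood-sensor-data",
        "Prefix": "sensors/",
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            "SchemaConfiguration": {
                "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
                "DatabaseName": "flood_db",
                "TableName": "sensor_readings",
            },
        },
    },
)
```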
Trump2022
2 years, 2 months ago
I like B
upvoted 1 times
gnandam
2 years, 2 months ago
B - Apache Parquet is an incredibly versatile open-source columnar storage format. Compared with text formats, it is about 2x faster to unload and takes up to 6x less storage in Amazon S3. It also lets you keep the Parquet files in Amazon S3 as an open format, with all data transformation and enrichment carried out in Amazon Redshift. Parquet is a self-describing format: the schema is embedded in the data itself, so it is not possible to track data changes within the file. To track changes, you can use Amazon Athena to track object metadata across Parquet files, since it provides an API for metadata.
upvoted 1 times
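
To ground the Athena part: once the Parquet objects are in S3 behind a Glue/Athena table, the analysts' "simple SQL" carries over almost unchanged. A sketch using boto3 (the database, table, column names, and results bucket are hypothetical):

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="""
        SELECT sensor_id, max(level_mm) AS peak_level
        FROM flood_db.sensor_readings
        WHERE dt = '2024-05-01'
        GROUP BY sensor_id
        ORDER BY peak_level DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "flood_db"},
    ResultConfiguration={"OutputLocation": "s3://flood-athena-results/"},
)
# Poll get_query_execution with this ID until the state is SUCCEEDED,
# then fetch rows with get_query_results.
print(resp["QueryExecutionId"])
```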
SGES
2 years, 3 months ago
B - more realistic, in my opinion
upvoted 1 times
cale
2 years, 2 months ago
Option B does not satisfy these requirements, though: 1. convert the raw data into a human-readable format, and 2. write the results to an on-premises relational database server.
upvoted 1 times
pinhead900
2 years, 2 months ago
Those are not the requirements; the actual requirement is: "The agency wants to increase overall application availability and reduce the effort that is required to perform maintenance tasks." So option B is right. Additionally, you cannot directly load CSV data into Aurora; you need to upload it to S3 first: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.LoadFromS3.html
upvoted 1 times
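
To illustrate pinhead900's point about the extra hop: Aurora MySQL ingests files with its LOAD DATA FROM S3 extension (see the linked docs), so a CSV pipeline needs an S3 staging step plus an IAM role attached to the cluster. A hypothetical loader sketch using pymysql; the host, table, bucket, and credentials are placeholders:

```python
import pymysql

conn = pymysql.connect(
    host="aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="REPLACE_ME",  # fetch from AWS Secrets Manager in practice
    database="flood_db",
)
with conn.cursor() as cur:
    # Requires the cluster's aws_default_s3_role (or aurora_load_from_s3_role)
    # parameter to point at an IAM role that can read the bucket.
    cur.execute(
        "LOAD DATA FROM S3 's3://flood-sensor-data/staging/readings.csv' "
        "INTO TABLE sensor_readings "
        "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
    )
conn.commit()
```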
cale
2 years, 2 months ago
I actually like option B, and it is how I would do it, but those two requirements (at least as I interpret them) are throwing me off a bit. It's one of those questions that is tricky even though you know what to do in real life.
upvoted 1 times
cale
2 years, 3 months ago
Selected Answer: A
I will go with A because it satisfies the requirements.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other (20%)