Exam AWS Certified Security - Specialty SCS-C02 topic 1 question 55 discussion

A company is using AWS to run a long-running analysis process on data that is stored in Amazon S3 buckets. The process runs on a fleet of Amazon EC2 instances that are in an Auto Scaling group. The EC2 instances are deployed in a private subnet of a VPC that does not have internet access. The EC2 instances and the S3 buckets are in the same AWS account.
The EC2 instances access the S3 buckets through an S3 gateway endpoint that has the default access policy. Each EC2 instance is associated with an instance profile role that has a policy that explicitly allows the s3:GetObject action and the s3:PutObject action for only the required S3 buckets.
The company learns that one or more of the EC2 instances are compromised and are exfiltrating data to an S3 bucket that is outside the company's organization in AWS Organizations. A security engineer must implement a solution to stop this exfiltration of data and to keep the EC2 processing job functional.
Which solution will meet these requirements?

  • A. Update the policy on the S3 gateway endpoint to allow the S3 actions only if the values of the aws:ResourceOrgID and aws:PrincipalOrgID condition keys match the company's values.
  • B. Update the policy on the instance profile role to allow the S3 actions only if the value of the aws:ResourceOrgID condition key matches the company's value.
  • C. Add a network ACL rule to the subnet of the EC2 instances to block outgoing connections on port 443.
  • D. Apply an SCP on the AWS account to allow the S3 actions only if the values of the aws:ResourceOrgID and aws:PrincipalOrgID condition keys match the company's values.
Suggested Answer: D
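For illustration only, here is a minimal sketch of the kind of SCP that option D describes, in the same shape as Example 3 of the AWS blog post linked in the comments below (the statement ID is a made-up placeholder): it denies S3 actions whenever the bucket's organization ID does not match the calling principal's organization ID.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3AccessOutsideMyOrg",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceOrgID": "${aws:PrincipalOrgID}"
        }
      }
    }
  ]
}

Because an SCP only limits what the account's principals can do, the instance profile role keeps its existing s3:GetObject and s3:PutObject permissions on the required in-organization buckets, so the processing job keeps running.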

Comments

kejam
Highly Voted 1 year ago
Selected Answer: D
Answer D, based on the syntax of these answers.
A. This could work, but you don't need aws:ResourceOrgID and aws:PrincipalOrgID; you can add allowed buckets (internal or external) as needed, which is much more flexible IMO (see the sketch below). https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#edit-vpc-endpoint-policy-s3
B. This doesn't prevent S3 actions on external accounts.
C. This does nothing, as the S3 endpoint is inside the VPC.
D. This solution matches the answer exactly. See Example 3: https://aws.amazon.com/blogs/security/how-to-control-access-to-aws-resources-based-on-aws-account-ou-or-organization/
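For reference, a rough sketch of that bucket-allowlist style of S3 gateway endpoint policy (the bucket name is a placeholder, not from the question; a real policy would list the buckets the analysis job actually needs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToApprovedBucketsOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::approved-analysis-bucket",
        "arn:aws:s3:::approved-analysis-bucket/*"
      ]
    }
  ]
}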
upvoted 11 times
NoCrapEva
9 months, 4 weeks ago
Also, the question states the company has AWS Organizations; therefore any policy restrictions SHOULD be done at the organization level, in this case with an SCP.
upvoted 1 times
...
AgboolaKun
1 year ago
I totally agree. I have always thought that D is the correct answer, but I could not locate any supporting documentation online. Thank you for providing the link. Example 3 in the link, as you pointed out, tallies with the scenario in this question.
upvoted 1 times
...
Raphaello
9 months, 3 weeks ago
In fact, Example 3 ("Restrict access to AWS resources within my organization", in this case S3) means denying access from principals (e.g. EC2 instance roles) that do not belong to the S3 bucket's org. That example does not correspond to what we need to do here!

"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::*/*",
"Condition": { "StringNotEquals": { "aws:ResourceOrgID": "${aws:PrincipalOrgID}" } }

Note that "aws:PrincipalOrgID" here is a variable. Whereas what we basically want is for our own EC2 instances not to access S3 buckets that belong to another account:

"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::*/*",
"Condition": { "StringEquals": { "aws:PrincipalOrgID": [ "o-yyyyyyyyyy" ] } }

Or maybe even add an explicit deny statement if "aws:ResourceOrgID" does not equal my org ID "o-yyyyyyyyyy".
upvoted 1 times
...
...
1c7c461
Highly Voted 11 months, 3 weeks ago
Selected Answer: B
The answer is B. You all missed the part that the EC2 instances are compromised. The restriction has to be added to the instance profile role of the EC2 instances to restrict which S3 buckets they can connect to. This question is about limiting access from EC2 to external S3 buckets.
upvoted 6 times
...
IPLogic
Most Recent 4 days, 3 hours ago
Selected Answer: A
To stop the exfiltration of data from compromised EC2 instances to external S3 buckets while keeping the EC2 processing job functional, the best solution is option A: update the policy on the S3 gateway endpoint to allow the S3 actions only if the values of the aws:ResourceOrgID and aws:PrincipalOrgID condition keys match the company's values. This approach ensures that only resources and principals within the company's AWS Organization can perform S3 actions, effectively blocking any attempts to exfiltrate data to S3 buckets outside the organization. By updating the S3 gateway endpoint policy, you can enforce this restriction at the network level, providing a robust and centralized control mechanism.
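A hedged sketch of what option A's endpoint policy could look like (the organization ID o-xxxxxxxxxx and the statement ID are placeholders, not values from the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3OnlyWithinMyOrg",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceOrgID": "o-xxxxxxxxxx",
          "aws:PrincipalOrgID": "o-xxxxxxxxxx"
        }
      }
    }
  ]
}

Because the policy is attached to the gateway endpoint, it only filters requests that traverse that endpoint from this VPC and leaves the instance profile role and any SCPs untouched.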
upvoted 1 times
...
mzeynalli
4 weeks ago
Selected Answer: A
Using Service Control Policies (SCPs) can be a part of a broader security strategy, but there are specific reasons why it may not be the most effective or immediate solution for the scenario where EC2 instances are compromised and exfiltrating data to an S3 bucket outside the company's organization: 1. Scope of SCPs: SCPs are designed to manage permissions across AWS Organizations. They apply to AWS accounts rather than individual resources. If the compromised instances are operating under an IAM role that has certain permissions, an SCP may not have the granularity needed to effectively restrict access at the resource level.
upvoted 1 times
...
jakie22332
1 month ago
Selected Answer: B
https://aws.amazon.com/blogs/security/how-to-control-access-to-aws-resources-based-on-aws-account-ou-or-organization/
upvoted 1 times
...
icecool36
7 months ago
Selected Answer: D
SCP is the right choice.
Not B: This is only effective if the nodes are replaced, and the processing must not be interrupted.
Not A: This does not do anything against the exfiltration.
Not C: This will not work.
upvoted 1 times
...
9bb8cb3
7 months, 2 weeks ago
Selected Answer: A
Blocking at the network layer would still allow you to have other workloads that can talk to buckets outside of the account, whereas option D, as others have suggested, would mean no workload in the account would ever be able to talk to a bucket outside the org, which is IMO too restrictive; the question is not asking for a general solution, just a fix for this specific problem. You can also create additional VPC endpoints and bind them to other route tables that don't have this policy, to allow for the other architectural possibilities mentioned above.
upvoted 3 times
...
ion_gee
8 months, 1 week ago
The correct answer should be B, as it directly addresses the issue. Option D seems too broad and might affect other roles in the account.
upvoted 2 times
...
Noexperience
9 months, 1 week ago
Selected Answer: B
"Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::bucket-name/*", // Specific buckets to restrict "arn:aws:s3:::another-bucket/*" ], "Condition": { "StringEquals": { "aws:ResourceOrgID": "YOUR_AWS_ORGANIZATION_ID" } } } ] }
upvoted 2 times
...
bkbaws
9 months, 1 week ago
Selected Answer: A
The EC2 role's S3 GET/PUT actions are restricted to the appropriate buckets, so the exfiltration bucket access is being granted by the default S3 gateway endpoint policy. Hence restricting the EC2-attached IAM role to the given organization (B) will do nothing, and B is incorrect. C would break everything. For (D), SCPs don't apply to resource policies, so exfiltration would continue through the S3 gateway. Answer is A.
upvoted 1 times
...
Raphaello
10 months ago
Selected Answer: B
The problem is that EC2 instances are exfiltrating data to an S3 bucket that is outside the company's organization in AWS Organizations. So we need to make sure those instances cannot put the data into an external account's bucket. Therefore, we need to restrict access ONLY to resources within the organization using the condition key "aws:ResourceOrgID". Remember, it is not about controlling access to our own S3 bucket; it is about stopping the EC2 instances from exfiltrating our data to accounts outside our org. Option B is the correct answer.
upvoted 1 times
...
LazyAutonomy
10 months ago
Selected Answer: A
Answer is A. D is wrong because attackers won't use the EC2 instance credentials to exfiltrate data; no attacker is that stupid.
upvoted 2 times
LazyAutonomy
10 months ago
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#edit-vpc-endpoint-policy-s3 https://developer.squareup.com/blog/adopting-aws-vpc-endpoints-at-square/
upvoted 1 times
...
...
mark16dc
10 months ago
Given the effectiveness and direct impact on preventing data exfiltration to external S3 buckets, Option D is the correct solution. It leverages the organizational control provided by AWS Organizations to enforce policy restrictions at the account level, ensuring that S3 actions are confined to the company's organizational boundaries, thus meeting the security requirements without disrupting the EC2 processing jobs.
upvoted 1 times
...
RNan
11 months, 1 week ago
Answer: B. The compromised EC2 instances are exfiltrating data to an S3 bucket outside the company's organization. By updating the policy on the instance profile role, you can restrict the S3 actions to only allow access to the required S3 buckets within the company's organization.
upvoted 1 times
...
Daniel76
11 months, 1 week ago
Selected Answer: D
Between A and D, A must be ruled out because "an endpoint policy does not override or replace identity-based policies or resource-based policies." So either the compromised EC2 instance or the external S3 bucket can override the endpoint policy. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html
upvoted 1 times
...
DebbieB67
11 months, 2 weeks ago
Selected Answer: D
Answer D
upvoted 1 times
...
yorkicurke
11 months, 3 weeks ago
Selected Answer: A
This ensures that only S3 resources within the company's AWS Organization can be accessed through the endpoint. This prevents any exfiltration of data from compromised EC2 instances to external S3 buckets, while STILL allowing the processing job on the instances to function normally by accessing the company's internal S3 resources through the private endpoint. https://repost.aws/questions/QU2Qx3s51DQ9SyrlWueh9L_Q/restrict-access-to-s3-bucket
upvoted 1 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other