exam questions

Exam AWS Certified AI Practitioner AIF-C01 All Questions

View all questions & answers for the AWS Certified AI Practitioner AIF-C01 exam

Exam AWS Certified AI Practitioner AIF-C01 topic 1 question 86 discussion

Which prompting attack directly exposes the configured behavior of a large language model (LLM)?

  • A. Prompted persona switches
  • B. Exploiting friendliness and trust
  • C. Ignoring the prompt template
  • D. Extracting the prompt template
Show Suggested Answer Hide Answer
Suggested Answer: D 🗳️

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
kopper2019
2 weeks ago
D. Extracting the prompt template
upvoted 1 times
...
Jessiii
2 weeks, 6 days ago
Selected Answer: D
Extracting the prompt template refers to a situation where the attacker tries to reveal or access the underlying structure or instructions used to configure the behavior of the large language model (LLM). This type of attack can expose how the model has been trained or how it responds to certain inputs, effectively giving the attacker insight into how the LLM has been directed to generate responses. This type of attack could potentially lead to misuse, such as causing the model to behave in unintended ways, or even allow an attacker to manipulate the behavior of the model by crafting specific inputs based on the extracted prompt template.
upvoted 1 times
...
dspd
1 month ago
Selected Answer: D
D. Extracting the prompt template
upvoted 1 times
...
AzureDP900
1 month, 1 week ago
Selected Answer: B
B. Exploiting friendliness and trust Exploiting friendliness and trust involves manipulating the LLM to respond in a way that appears friendly or trustworthy, potentially causing it to deviate from its intended behavior. This type of attack directly exposes how the LLM has been configured to interact with users, often leading it to provide information or make decisions that align more closely with the attacker's intentions rather than its original programming.
upvoted 1 times
...
Moon
2 months ago
Selected Answer: D
D: Extracting the prompt template Explanation: Extracting the prompt template is a prompting attack where an attacker intentionally crafts inputs to reveal the underlying configuration or instructions (prompt template) used to guide the large language model (LLM). This exposes the internal behavior or design of the model, potentially revealing sensitive or proprietary information about how the LLM is configured. Why not the other options? A: Prompted persona switches: This attack involves manipulating the LLM to adopt a different persona or role than intended but does not directly expose the prompt template.
upvoted 2 times
...
aws_Tamilan
2 months ago
Selected Answer: D
D. Extracting the prompt template Explanation: Extracting the prompt template is a prompting attack where the attacker directly attempts to reveal the underlying configured behavior or instructions of the large language model (LLM). This can expose sensitive configurations, system instructions, or contextual prompts that guide the model's behavior.
upvoted 1 times
...
Community vote distribution
A (35%)
C (25%)
B (20%)
Other
Most Voted
A voting comment increases the vote count for the chosen answer by one.

Upvoting a comment with a selected answer will also increase the vote count towards that answer by one. So if you see a comment that you already agree with, you can upvote it instead of posting a new comment.

SaveCancel
Loading ...
exam
Someone Bought Contributor Access for:
SY0-701
London, 1 minute ago