
Exam AI-102 topic 8 question 5 discussion

Actual exam question from Microsoft's AI-102
Question #: 5
Topic #: 8

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure subscription that contains an Azure OpenAI resource named AI1 and an Azure AI Content Safety resource named CS1.

You build a chatbot that uses AI1 to provide generative answers to specific questions and CS1 to check input and output for objectionable content.

You need to optimize the content filter configurations by running tests on sample questions.

Solution: From Content Safety Studio, you use the Protected material detection feature to run the tests.

Does this meet the requirement?

  • A. Yes
  • B. No
Suggested Answer: B

Comments

chrillelundmark
3 days, 20 hours ago
Selected Answer: B
https://learn.microsoft.com/en-us/azure/ai-services/content-safety/overview#product-features
upvoted 1 times
a8da4af
1 month, 1 week ago
B is correct. Here's what ChatGPT says: The answer is B (No). Here's why: the Protected material detection feature in Content Safety Studio is designed to detect and monitor sensitive or protected content, and it does not directly apply to optimizing the filter configurations for a chatbot. To test content-filtering settings for objectionable or harmful content in AI-generated responses, you would apply Content Safety's standard filtering features directly to the chatbot's input and output, rather than Protected material detection. To optimize content filters, you would run sample questions and responses through Content Safety Studio's primary filtering and moderation tools, where you can review and adjust the configurations.
upvoted 1 times
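For context on what "optimizing the content filter configurations" means in practice: Azure AI Content Safety scores text per harm category (Hate, SelfHarm, Sexual, Violence), and a filter configuration is essentially a per-category severity threshold. A minimal local sketch of running sample questions against candidate configurations — the severity scores, sample questions, and helper names (`is_blocked`, `evaluate`) below are made up for illustration, not the real SDK:

```python
# Hypothetical sketch of tuning per-category severity thresholds.
# The four harm categories match Azure AI Content Safety; the sample
# questions and their severity scores are invented test data.

CATEGORIES = ["Hate", "SelfHarm", "Sexual", "Violence"]

# Made-up severity scores a moderation service might return per sample input.
SAMPLE_RESULTS = {
    "How do I reset my password?":     {"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 0},
    "Describe a violent movie scene":  {"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4},
    "Mildly rude joke":                {"Hate": 2, "SelfHarm": 0, "Sexual": 0, "Violence": 0},
}

def is_blocked(scores: dict, thresholds: dict) -> bool:
    """Block when any category's severity meets or exceeds its threshold."""
    return any(scores[c] >= thresholds[c] for c in CATEGORIES)

def evaluate(thresholds: dict) -> dict:
    """Run every sample question against one candidate filter configuration."""
    return {q: is_blocked(scores, thresholds) for q, scores in SAMPLE_RESULTS.items()}

strict = {c: 2 for c in CATEGORIES}   # low thresholds: blocks more content
lenient = {c: 5 for c in CATEGORIES}  # high thresholds: blocks less content

print(evaluate(strict))
print(evaluate(lenient))
```

Comparing the two runs against how you *want* each sample classified is the feedback loop the question is about; Content Safety Studio's moderation tools give you the same loop interactively against the real service.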
Slapp1n
1 month, 2 weeks ago
Selected Answer: A
The answer should be Yes: The solution involves using the "Protected material detection" feature from Content Safety Studio to optimize content filter configurations by running tests on sample questions. This approach meets the requirement for testing and optimizing content safety configurations for generative AI output, ensuring that objectionable content is properly detected and managed.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other