AWS Certified AI Practitioner AIF-C01: Topic 1, Question 123 Discussion

Which technique can a company use to lower bias and toxicity in generative AI applications during the post-processing stage of the ML lifecycle?

  • A. Human-in-the-loop
  • B. Data augmentation
  • C. Feature engineering
  • D. Adversarial training
Suggested Answer: A

Comments

Jessiii
2 weeks, 6 days ago
Selected Answer: A
Human-in-the-loop involves human oversight during the post-processing phase to review and mitigate biased or toxic outputs generated by AI models.
upvoted 1 time
Moon
2 months ago
Selected Answer: A
The question specifies reducing bias and toxicity during post-processing of generated content.

  • A. Human-in-the-loop: the correct answer. Human review of generated outputs allows biased or toxic content to be filtered or modified after generation.
  • B. Data augmentation: occurs during training and modifies the training data itself, not the generated outputs.
  • C. Feature engineering: also a training-phase activity, focused on input features, not generated content.
  • D. Adversarial training: used during training to improve robustness, not to filter post-generation content.
upvoted 2 times
may2021_r
2 months ago
Selected Answer: A
The correct answer is A. Human-in-the-loop review provides direct oversight for reducing bias and toxicity.
upvoted 1 time
aws_Tamilan
2 months ago
Selected Answer: A
Human-in-the-loop (HITL) is a technique used in the post-processing stage of the machine learning lifecycle to improve model outputs, including by reducing bias and toxicity. In HITL, human evaluators intervene to assess and refine model outputs. This feedback loop helps identify and correct biases, toxic language, and other undesirable outputs before they are presented to end users, ensuring that the AI system adheres to ethical guidelines and improving the quality of generated content.
upvoted 1 time
ap6491
2 months, 1 week ago
Selected Answer: A
Human-in-the-loop (HITL) involves incorporating human reviewers into the model’s post-processing workflow to evaluate and refine outputs generated by the AI. This approach helps identify and reduce bias or toxic content by leveraging human judgment to assess and correct inappropriate or inaccurate results. HITL is particularly useful in generative AI applications where outputs can be subjective and require nuanced review to align with ethical and business standards.
upvoted 1 time
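
To make the post-processing point concrete, here is a minimal sketch of an HITL gate in Python. The toxicity scorer and review queue below are hypothetical stand-ins, not any specific AWS service: low-risk outputs are served directly, and anything above a threshold is held for a human decision before release.

# Minimal sketch of a human-in-the-loop (HITL) post-processing gate,
# assuming a hypothetical toxicity scorer and a simulated reviewer.
# A production system would swap in a real moderation model and a
# real review queue; nothing here is a specific AWS API.

from dataclasses import dataclass, field

TOXICITY_THRESHOLD = 0.5      # outputs scoring above this go to a human

BLOCKLIST = {"hate", "slur"}  # toy vocabulary for the stub scorer


def score_toxicity(text: str) -> float:
    """Hypothetical scorer: fraction of blocklisted words in the text."""
    words = text.lower().split()
    return sum(w in BLOCKLIST for w in words) / len(words) if words else 0.0


@dataclass
class ReviewQueue:
    """Stand-in for a human review queue (e.g., a ticketing system)."""
    pending: list[str] = field(default_factory=list)

    def submit(self, text: str) -> None:
        self.pending.append(text)

    def decide(self, text: str) -> bool:
        # Simulated reviewer: reject everything flagged. A real reviewer
        # would read the output and approve or reject it case by case.
        return False


def postprocess(generated: str, queue: ReviewQueue) -> str | None:
    """Release low-risk outputs directly; route the rest to a human."""
    if score_toxicity(generated) < TOXICITY_THRESHOLD:
        return generated                     # low risk: serve as-is
    queue.submit(generated)                  # flagged: human must decide
    return generated if queue.decide(generated) else None


if __name__ == "__main__":
    queue = ReviewQueue()
    for output in ["The capital of France is Paris.", "hate hate hate"]:
        served = postprocess(output, queue)
        print("served    " if served else "suppressed", "->", output)

The design point is that the human sits after generation: nothing about the model or its training data changes, which is what distinguishes HITL post-processing from options B, C, and D above.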
Community vote distribution: A (35%), C (25%), B (20%), Other