Exam AWS Certified AI Practitioner AIF-C01 topic 1 question 71 discussion

An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms.
What should the firm do when developing and deploying the LLM? (Choose two.)

  • A. Include fairness metrics for model evaluation.
  • B. Adjust the temperature parameter of the model.
  • C. Modify the training data to mitigate bias.
  • D. Avoid overfitting on the training data.
  • E. Apply prompt engineering techniques.
Suggested Answer: AC

Comments

kopper2019
2 weeks, 2 days ago
A. Include fairness metrics for model evaluation. C. Modify the training data to mitigate bias.
upvoted 1 time
Jessiii
2 weeks, 6 days ago
Selected Answer: AC
A. Include fairness metrics for model evaluation: fairness metrics help evaluate whether the model treats all groups fairly and does not introduce harmful bias. When developing an LLM for document processing, it is essential to assess and mitigate potential biases in the model's outputs so that it operates responsibly and equitably, especially when the model handles sensitive data.
C. Modify the training data to mitigate bias: bias in AI models often originates in biased training data. Modifying the training data so that it is representative and free from harmful biases is a crucial step in reducing the risk of unfair or harmful outcomes. This proactive approach is essential for responsible deployment.
upvoted 1 time
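To make "fairness metrics" concrete, here is a minimal sketch of one common metric, the demographic parity difference: the gap in positive-outcome rates between groups. The grouping and field names are hypothetical, and a real evaluation would typically use dedicated tooling such as SageMaker Clarify.

```python
# Minimal sketch of one fairness metric: demographic parity difference,
# i.e. the gap in positive-prediction rates between groups.
# The group labels and predictions below are hypothetical examples.
from collections import defaultdict

def demographic_parity_difference(groups, predictions):
    """Return the largest gap in positive-prediction rate across groups."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += int(p)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: does the model "approve" documents from one client segment
# far more often than another?
groups = ["smb", "smb", "smb", "enterprise", "enterprise"]
preds = [1, 0, 0, 1, 1]
print(demographic_parity_difference(groups, preds))  # 1.0 - 1/3 ≈ 0.67
```

A large gap is a signal to investigate, not a verdict; which metric is appropriate depends on the use case.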
dspd
1 month ago
Selected Answer: AC
A: Include fairness metrics for model evaluation
  • Critical for responsible AI implementation
  • Helps identify discriminatory patterns
  • Ensures equitable treatment across different groups
  • Allows for continuous monitoring of fairness
  • Essential for an accounting firm handling sensitive financial data
C: Modify the training data to mitigate bias
  • Addresses bias at the source
  • Ensures representative training data
  • Helps prevent discriminatory outcomes
  • Critical for fair treatment of all clients
  • Fundamental to responsible AI development
upvoted 1 time
KawtarZ
1 month, 3 weeks ago
Selected Answer: BE
A. No need for fairness metrics, as the use case is document processing.
C. Modifying the training data would mean retraining the model, which is not needed for this use case.
D. There is no retraining needed in this case, so avoiding overfitting is also not needed.
upvoted 1 time
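For context on option B: temperature is a sampling parameter that trades determinism for output diversity; it does not detect or mitigate bias. Below is a minimal sketch of where it sits in an LLM call, assuming boto3 and an Anthropic Claude model on Amazon Bedrock (the model ID and prompt are examples, and the request-body schema differs for other model families).

```python
# Hedged sketch: temperature controls sampling randomness in the model's
# output; it is not a fairness or responsible-AI control.
# Assumes boto3 and an Anthropic Claude model on Amazon Bedrock.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "temperature": 0.2,  # low temperature = more deterministic extraction
    "messages": [
        {
            "role": "user",
            "content": "Extract the invoice total from this document: ...",
        }
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```

Lowering temperature can make document extraction more repeatable, which is useful, but it does not address fairness, which is why most commenters reject B as a responsible-AI measure.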
jove
3 months, 3 weeks ago
Selected Answer: AC
A. Include fairness metrics for model evaluation: fairness metrics help ensure that the model is not biased against any particular group. This is especially important in fields like accounting, where bias in automated decisions could lead to unethical outcomes. Fairness metrics provide insight into how equally the model treats all data groups.
C. Modify the training data to mitigate bias: adjusting the training data to address identified biases is crucial for developing responsible AI applications. This can involve balancing the dataset or removing biased samples, ensuring the model generalizes fairly across different data types and groups.
upvoted 3 times
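As a companion to "balancing the dataset or removing biased samples", here is a minimal sketch of one such step: upsampling underrepresented document categories before fine-tuning. The record structure and category names are hypothetical; a real pipeline would also audit labels and data provenance.

```python
# Minimal sketch of one bias-mitigation step: upsampling so that no
# document category is underrepresented in the training set.
# The record structure and category names are hypothetical.
import random

def rebalance(records, key="category", seed=0):
    """Upsample every category to the size of the largest one."""
    rng = random.Random(seed)
    buckets = {}
    for record in records:
        buckets.setdefault(record[key], []).append(record)
    target = max(len(bucket) for bucket in buckets.values())
    balanced = []
    for bucket in buckets.values():
        balanced.extend(bucket)
        balanced.extend(rng.choices(bucket, k=target - len(bucket)))
    rng.shuffle(balanced)
    return balanced

records = (
    [{"category": "invoice", "text": "..."}] * 8
    + [{"category": "tax_form", "text": "..."}] * 2
)
print(len(rebalance(records)))  # 16: tax_form upsampled from 2 to 8
```

Upsampling is only one option; removing skewed samples or collecting more data for the underrepresented group is often preferable.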
Community vote distribution: A (35%), C (25%), B (20%), Other