An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms. What should the firm do when developing and deploying the LLM? (Choose two.)
A. No fairness metrics are needed, since the use case is only document processing.
C. Modifying the training data implies re-training the model, which is not needed for this use case.
D. No re-training is needed for this case, and avoiding overfitting is also unnecessary.
A. Include fairness metrics for model evaluation: Fairness metrics help ensure that the model is not biased against any particular group. This is especially important in fields like accounting, where any biases in automated decisions could lead to unethical outcomes. Fairness metrics provide insight into how well the model treats all data groups equally.
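As a rough illustration of what a fairness metric looks like in practice, here is a minimal sketch of demographic parity difference (the gap in positive-prediction rates between groups). The function name and data are illustrative, not from any particular library:

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        # Collect predictions belonging to this group and compute its positive rate.
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]  # 0.0 means all groups are treated equally

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5: group "a" is favored
```

A value near zero suggests the model flags documents at similar rates regardless of group; a large gap is a signal to investigate the data or the model.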
C. Modify the training data to mitigate bias: Adjusting the training data to address any identified biases is crucial for developing responsible AI applications. This can involve balancing the dataset or removing biased samples, ensuring the model generalizes fairly across different data types and groups.
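One common way to balance a dataset, as described above, is to oversample the underrepresented group until all groups are the same size. This is a hedged sketch with made-up field names, not a prescribed implementation:

```python
import random

def oversample_balance(samples, key="group", seed=0):
    """Duplicate samples from smaller groups until every group matches the largest."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    buckets = {}
    for s in samples:
        buckets.setdefault(s[key], []).append(s)
    target = max(len(b) for b in buckets.values())
    balanced = []
    for b in buckets.values():
        balanced.extend(b)
        # Draw extra samples (with replacement) to close the gap.
        balanced.extend(rng.choices(b, k=target - len(b)))
    return balanced

data = [{"group": "a"}] * 6 + [{"group": "b"}] * 2
balanced = oversample_balance(data)
counts = {g: sum(1 for s in balanced if s["group"] == g) for g in ("a", "b")}
print(counts)  # {'a': 6, 'b': 6}
```

Alternatives include undersampling the majority group or removing individually biased samples; which is appropriate depends on how much data the firm can afford to discard.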
Community vote distribution: A (35%), C (25%), B (20%), Other