An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms. What should the firm do when developing and deploying the LLM? (Choose two.)
A. Include fairness metrics for model evaluation: Fairness metrics help determine whether the model treats all groups equitably and does not introduce harmful bias. When developing an LLM for document processing, it is essential to assess and mitigate potential biases in the model's outputs so that it operates responsibly, especially when the model handles sensitive data (see the metric sketch below).
C. Modify the training data to mitigate bias: Bias in AI models often originates from biased training data. Modifying the training data to ensure it is representative and free from harmful biases is a crucial step in reducing the risk of the model producing unfair or harmful outcomes. This proactive approach is essential for responsible deployment.
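To make the fairness-metrics point in A concrete, here is a minimal Python sketch of one common metric, the demographic parity gap: the largest difference in positive-decision rates between any two groups. The group labels, record layout, and sample data are all hypothetical, not drawn from any particular toolkit; a real evaluation would track several complementary metrics.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Return (gap, per-group rates) for an iterable of
    (group, outcome) pairs, where outcome is 1 for a positive
    model decision (e.g., a document auto-approved) and 0 otherwise.
    All field names here are hypothetical."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical evaluation data: (group, model decision)
sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
          ("group_b", 1), ("group_b", 0), ("group_b", 0)]
gap, rates = demographic_parity_gap(sample)
print(rates)             # per-group positive-decision rates
print(f"gap={gap:.2f}")  # a large gap suggests the model favors one group
```

A large gap indicates that the model's decisions correlate with group membership, which is exactly the kind of pattern a fairness evaluation is meant to surface before deployment.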
A: Include fairness metrics for model evaluation
Critical for responsible AI implementation
Helps identify discriminatory patterns
Ensures equitable treatment across different groups
Allows for continuous monitoring of fairness
Essential for an accounting firm handling sensitive financial data
C: Modify the training data to mitigate bias (see the rebalancing sketch after this list)
Addresses bias at the source
Ensures representative training data
Helps prevent discriminatory outcomes
Critical for fair treatment of all clients
Fundamental to responsible AI development
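To illustrate the data-side mitigation in C, here is a minimal Python sketch that oversamples underrepresented groups so each contributes equally to the training set. The `source` field and the corpus are hypothetical; in practice, rebalancing like this complements data audits, reweighting, and filtering rather than replacing them.

```python
import random

def rebalance_by_group(examples, key, seed=0):
    """Oversample underrepresented groups so every group contributes
    the same number of training examples. `examples` is a list of
    dicts; `key` is a hypothetical field naming the group."""
    rng = random.Random(seed)
    by_group = {}
    for ex in examples:
        by_group.setdefault(ex[key], []).append(ex)
    target = max(len(v) for v in by_group.values())
    balanced = []
    for group_examples in by_group.values():
        balanced.extend(group_examples)
        # Draw extra samples (with replacement) for smaller groups.
        balanced.extend(rng.choices(group_examples,
                                    k=target - len(group_examples)))
    rng.shuffle(balanced)
    return balanced

# Hypothetical corpus skewed toward one document source
corpus = ([{"source": "firm_a", "text": "..."} for _ in range(8)]
          + [{"source": "firm_b", "text": "..."} for _ in range(2)])
balanced = rebalance_by_group(corpus, key="source")
print(len(balanced))  # 16: both sources now contribute 8 examples
```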
A. Fairness metrics are arguably unnecessary here, since the use case is document processing rather than decisions about people.
C. Modifying the training data implies re-training the model, which is not needed for this use case.
D. No re-training is required in this case, so avoiding overfitting is not a concern either.
A. Include fairness metrics for model evaluation: Fairness metrics help ensure that the model is not biased against any particular group. This is especially important in fields like accounting, where any biases in automated decisions could lead to unethical outcomes. Fairness metrics provide insight into how well the model treats all data groups equally.
C. Modify the training data to mitigate bias: Adjusting the training data to address any identified biases is crucial for developing responsible AI applications. This can involve balancing the dataset or removing biased samples, ensuring the model generalizes fairly across different data types and groups.
Community vote distribution: A (35%), C (25%), B (20%), Other