Your organization manages an online message board. A few months ago, you discovered an increase in toxic language and bullying on the message board. You deployed an automated text classifier that flags certain comments as toxic or harmful. Now some users are reporting that benign comments referencing their religion are being misclassified as abusive. Upon further inspection, you find that your classifier's false positive rate is higher for comments that reference certain underrepresented religious groups. Your team has a limited budget and is already overextended. What should you do?
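For reference, one common way to surface the disparity described above is to slice the evaluation set by the group a comment references and compute the false positive rate per slice. The sketch below is a minimal illustration with made-up data and hypothetical column names (`predicted_toxic`, `actually_toxic`, `group`); it is not the board's actual pipeline.

```python
import pandas as pd

# Toy evaluation data (hypothetical column names and values):
# predicted_toxic -- 1 if the classifier flagged the comment as toxic
# actually_toxic  -- 1 if human review confirmed it was toxic
# group           -- religious group the comment references, if any
df = pd.DataFrame({
    "predicted_toxic": [1, 0, 1, 1, 0, 1, 0, 1],
    "actually_toxic":  [1, 0, 0, 1, 0, 0, 0, 0],
    "group": ["A", "A", "B", "B", "B", "B", "A", "B"],
})

def false_positive_rate(slice_df: pd.DataFrame) -> float:
    """FPR = flagged-but-benign comments / all truly benign comments."""
    benign = slice_df[slice_df["actually_toxic"] == 0]
    if benign.empty:
        return float("nan")
    return float((benign["predicted_toxic"] == 1).mean())

# Compare FPR across slices; a large gap between groups is the
# misclassification disparity the users are reporting.
for name, slice_df in df.groupby("group"):
    print(f"group {name}: FPR = {false_positive_rate(slice_df):.2f}")
```

With the toy data above, group A's FPR is 0.00 while group B's is 0.75, the kind of per-slice gap that would confirm the users' reports.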