Your organization manages an online message board. A few months ago, you discovered an increase in toxic language and bullying on the message board. You deployed an automated text classifier that flags certain comments as toxic or harmful. Now some users are reporting that benign comments referencing their religion are being misclassified as abusive. Upon further inspection, you find that your classifier's false positive rate is higher for comments that reference certain underrepresented religious groups. Your team has a limited budget and is already overextended. What should you do?
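Before deciding, it helps to be concrete about what "higher false positive rate for certain groups" means: the rate at which genuinely benign comments are flagged as toxic, measured separately on each slice of an evaluation set. Below is a minimal sketch (not the exam's answer) of how such a per-group audit might look; the column names `group`, `label`, and `predicted` are hypothetical placeholders for a held-out set of labeled comments.

```python
# Minimal sketch: audit per-group false positive rates on a labeled
# evaluation set. Column names are assumptions for illustration only.
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """FPR = false positives / (false positives + true negatives)."""
    negatives = df[df["label"] == 0]           # comments that are actually benign
    if negatives.empty:
        return float("nan")
    false_positives = (negatives["predicted"] == 1).sum()
    return false_positives / len(negatives)

def fpr_by_group(df: pd.DataFrame) -> pd.Series:
    """Slice the evaluation set by the group a comment references and
    report the classifier's FPR on each slice."""
    return df.groupby("group").apply(false_positive_rate)

# Toy usage: group "B" is flagged far more often despite benign labels.
eval_df = pd.DataFrame({
    "group":     ["A", "A", "B", "B", "B"],
    "label":     [0,   0,   0,   0,   1],      # 1 = actually toxic
    "predicted": [0,   1,   1,   1,   1],      # 1 = flagged as toxic
})
print(fpr_by_group(eval_df))
```

A disparity surfaced this way is evidence of a fairness problem in the deployed classifier, which is what the question is asking you to weigh against the team's limited budget.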