The statement in option D claims the exact opposite of what is true: it asserts that the KS test is the more robust choice for large datasets, when in fact JS divergence is more robust there. The KS test's p-values become overly sensitive as sample size grows, flagging even trivial distribution differences as significant, while JS divergence stays bounded and interpretable.
This is a key advantage of using Jensen-Shannon divergence. It produces a value between 0 and 1, which represents the divergence between two distributions. This value can be interpreted without needing to set arbitrary thresholds or cutoffs. In contrast, the KS test involves comparing the test statistic to a critical value, which can depend on the significance level chosen.
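To make the contrast concrete, here is a small sketch comparing the two on a large sample with a tiny distribution shift. It assumes SciPy is available; note that `scipy.spatial.distance.jensenshannon` returns the JS *distance*, so it is squared to get the divergence, which is bounded in [0, 1] when using base 2.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=100_000)
current = rng.normal(loc=0.05, scale=1.0, size=100_000)  # tiny mean shift

# JS divergence works on probability distributions, so bin both
# samples into histograms over a shared set of bin edges.
edges = np.histogram_bin_edges(np.concatenate([baseline, current]), bins=50)
p, _ = np.histogram(baseline, bins=edges, density=True)
q, _ = np.histogram(current, bins=edges, density=True)

# jensenshannon normalizes p and q internally and returns the distance;
# squaring gives the divergence, bounded in [0, 1] with base=2.
js = jensenshannon(p, q, base=2) ** 2

# The KS test compares the empirical CDFs and returns a p-value that
# must be judged against a chosen significance level.
stat, pvalue = ks_2samp(baseline, current)

print(f"JS divergence: {js:.4f}")   # small, directly interpretable
print(f"KS p-value:   {pvalue:.2e}")
```

With 100,000 samples, the KS p-value tends to fall below any conventional significance level even for this negligible 0.05-standard-deviation shift, while the JS divergence stays near zero, reflecting how similar the distributions actually are.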