
Exam AI-900 topic 1 question 3 discussion

Actual exam question from Microsoft's AI-900
Question #: 3
Topic #: 1

HOTSPOT -
You are developing a model to predict events by using classification.
You have a confusion matrix for the model scored on test data as shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:

Suggested Answer:
Box 1: 11 -

TP = True Positive.
The class labels in the training set can take on only two possible values, which we usually refer to as positive or negative. The positive and negative instances that a classifier predicts correctly are called true positives (TP) and true negatives (TN), respectively. Similarly, the incorrectly classified instances are called false positives (FP) and false negatives (FN).

Box 2: 1,033 -

FN = False Negative -
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/studio/evaluate-model-performance
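The four cell counts can be computed directly from label pairs. Below is a minimal sketch in plain Python; the `actual` and `predicted` lists are synthetic illustration data, not the exam's dataset.

```python
# Counting confusion-matrix cells for a binary classifier.
# Labels: 1 = positive, 0 = negative. Synthetic example data.
actual    = [1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 0, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives

print(tp, tn, fp, fn)  # -> 2 3 1 2
```

The same counts are what `sklearn.metrics.confusion_matrix` would arrange into a grid; the exam question is only asking you to read two of these cells off the exhibit.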

Comments

Vijaya
Highly Voted 4 years, 2 months ago
Answer 2 is also correct; please refer to https://docs.microsoft.com/en-us/azure/machine-learning/classic/evaluate-model-performance
upvoted 25 times
...
denizej
Highly Voted 4 years, 1 month ago
Answer 2 is correct. The grid used in the question is reversed compared to the MS documentation. A false negative means that a 0 was predicted but 1 was the actual outcome, of which there were 1,033 occurrences according to the grid used in the question.
upvoted 20 times
...
M2000F007fubar
Most Recent 4 weeks ago
To evaluate how well the model is performing, we use a confusion matrix. This matrix summarizes how many times the model made each type of prediction. It categorizes predictions into four groups:
- True Positives (TP): cases where the model correctly predicts a positive outcome. For example, an email is spam, and the model correctly identifies it as spam.
- True Negatives (TN): cases where the model correctly predicts a negative outcome. For example, an email is not spam, and the model correctly identifies it as not spam.
- False Positives (FP): cases where the model incorrectly predicts a positive outcome. For example, an email is not spam, but the model mistakenly labels it as spam. This is also called a "Type I error."
- False Negatives (FN): cases where the model incorrectly predicts a negative outcome. For example, an email is spam, but the model mistakenly labels it as not spam. This is also called a "Type II error."
upvoted 1 times
...
kim21
1 month, 3 weeks ago
The question's grid, with Actual on the columns and Predicted on the rows:

              Actual 1   Actual 0
Predicted 1      11          5
Predicted 0    1,033      13,951

X-axis (top): the Actual values. 1 = the actual label is positive; 0 = the actual label is negative.
Y-axis (left): the Predicted values. 1 = the model predicted positive; 0 = the model predicted negative.

This matrix shows:
True Positives (TP): 11 (Predicted 1, Actual 1) – the model correctly predicted "1".
False Positives (FP): 5 (Predicted 1, Actual 0) – the model predicted "1" when it should have been "0".
False Negatives (FN): 1,033 (Predicted 0, Actual 1) – the model predicted "0" when it should have been "1".
True Negatives (TN): 13,951 (Predicted 0, Actual 0) – the model correctly predicted "0".
upvoted 1 times
...
OwlDay
2 months, 2 weeks ago
The confusion matrix for the model scored on test data reveals the following:
- **True Positives (Correctly Predicted Positives)**: 11
- **False Negatives**: 5

This information is crucial for evaluating the model's performance, especially in a classification task. The true positives represent instances where the model correctly predicted the positive class, while false negatives indicate instances where the model failed to predict the positive class when it should have. Understanding these metrics helps assess the model's accuracy and effectiveness in identifying positive cases.
Source: Conversation with Copilot, 27/6/2024
upvoted 2 times
...
lightecho037
2 months, 2 weeks ago
No matter the axis labels (X and Y), the positions of TP, FP, FN, and TN remain consistent within the matrix.

              Actual 1   Actual 0
Predicted 1      11          5
Predicted 0    1,033      13,951

Interpretation:
True Positives (TP): Predicted 1, Actual 1 — Value: 11
False Positives (FP): Predicted 1, Actual 0 — Value: 5
False Negatives (FN): Predicted 0, Actual 1 — Value: 1,033
True Negatives (TN): Predicted 0, Actual 0 — Value: 13,951
upvoted 2 times
...
Jay23AmMonsIV
6 months, 1 week ago
In the question, Actual is on the X-axis and Predicted is on the Y-axis, so the answers are 11 and 1,033: TP = 11, TN = 13,951, FP = 5, FN = 1,033. In the Microsoft documentation, the Actual values are on the Y-axis and the Predicted values on the X-axis, so the table is transposed.
upvoted 3 times
...
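The transposition point raised in the comments above can be checked mechanically. The sketch below uses the cell values reported in this discussion (11, 5, 1033, 13951) and assumes the question's layout (Predicted on rows, Actual on columns); transposing the matrix to the documentation's layout swaps where FP and FN sit, but each count keeps its meaning.

```python
import numpy as np

# The question's grid as reported in the discussion:
# rows = Predicted (1, 0), columns = Actual (1, 0).
m = np.array([[11,       5],
              [1033, 13951]])

tp, fp = m[0]  # Predicted 1 row: Actual 1 (TP), Actual 0 (FP)
fn, tn = m[1]  # Predicted 0 row: Actual 1 (FN), Actual 0 (TN)

# Transposing gives the docs' layout (Actual on rows, Predicted on columns).
# The off-diagonal cells trade places, but TP and TN stay on the diagonal
# and each count still refers to the same kind of prediction.
t = m.T
assert t[0, 0] == tp and t[1, 1] == tn
assert t[1, 0] == fp and t[0, 1] == fn

print(tp, fn)  # -> 11 1033
```

This matches the suggested answer: Box 1 (true positives) is 11 and Box 2 (false negatives) is 1,033 regardless of which axis carries the actual labels.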
Tasoh
8 months, 3 weeks ago
Hello team, I do not understand this can someone explain why the answer is 11:1033? I do not understand the logic. Thank you
upvoted 3 times
...
Idontcareanymore
11 months, 2 weeks ago
I had a lot of difficulty with the answer to this question. However, now that I have stepped back and looked at the chart objectively, I realize it is not a chart you must derive anything from; it is a representational chart that simply presents the results. That is why the response is actually correct.
upvoted 1 times
...
bhd
1 year ago
The explanation chart shown is wrong. It should be: TP FP / FN TN.
upvoted 6 times
...
Vanessa23
1 year ago
11 corresponds to 0-0 1033 corresponds to 0-1
upvoted 1 times
...
MikeScout
1 year, 5 months ago
I do not agree with the response for answer 2. Suppose we have a binary classification, say a person is 'tall' or 'not tall' (two classes). The confusion matrix shows the top-left cell (row 1, column 1) as the true positives (correctly identified as tall). The next value in the top row (row 1, column 0) is the cases that are actually tall but classified as 'not tall', which is called a false negative. I read the question as asking for that 'false negative' count.

Table:
tall: correctly predicted | predicted not tall
not_tall: incorrectly classified as not tall | correctly identified as not tall
upvoted 3 times
...
Rahulkhd
1 year, 5 months ago
I have found that some answers are wrong and the explanations are not proper. You can use this course for proper explanations and correct answers: https://www.udemy.com/course/microsoft-azure-ai-900-latest-practice-test-2023/?referralCode=25EA856C8C63BC4DE950
upvoted 1 times
...
zellck
1 year, 5 months ago
1. 11 2. 1033 https://learn.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml?view=azureml-api-2#confusion-matrix
upvoted 4 times
...
rdemontis
1 year, 6 months ago
answers and explanations are correct
upvoted 1 times
...
GargoyleFeast
2 years, 1 month ago
TP = 11 and FN = 1033
upvoted 3 times
...
BLUE_BUBBLES
2 years, 3 months ago
The Microsoft table is transposed, so reason it through. The answer is correct.
upvoted 3 times
...