Exam Professional Machine Learning Engineer topic 1 question 10 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 10
Topic #: 1

Your team needs to build a model that predicts whether images contain a driver's license, passport, or credit card. The data engineering team already built the pipeline and generated a dataset composed of 10,000 images with driver's licenses, 1,000 images with passports, and 1,000 images with credit cards. You now have to train a model with the following label map: ['drivers_license', 'passport', 'credit_card']. Which loss function should you use?

  • A. Categorical hinge
  • B. Binary cross-entropy
  • C. Categorical cross-entropy
  • D. Sparse categorical cross-entropy
Suggested Answer: D

Comments

ransev
Highly Voted 3 years, 5 months ago
Answer is C
upvoted 20 times
gcp2021go
3 years, 4 months ago
Use sparse categorical crossentropy when your classes are mutually exclusive (e.g. when each sample belongs exactly to one class) and categorical crossentropy when one sample can have multiple classes or labels are soft probabilities (like [0.5, 0.3, 0.2]).
upvoted 9 times
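To make the difference concrete, here is a minimal TensorFlow/Keras sketch (the integer-to-class mapping and the example probabilities are assumed for illustration). Both losses compute the same cross-entropy; they differ only in the label format they expect:

  import tensorflow as tf

  # Assumed mapping: 0 = drivers_license, 1 = passport, 2 = credit_card
  int_labels = tf.constant([0, 2, 1])
  onehot_labels = tf.one_hot(int_labels, depth=3)  # same labels, one-hot encoded

  # Example predicted class probabilities for three images
  probs = tf.constant([[0.7, 0.2, 0.1],
                       [0.1, 0.1, 0.8],
                       [0.2, 0.6, 0.2]])

  scce = tf.keras.losses.SparseCategoricalCrossentropy()
  cce = tf.keras.losses.CategoricalCrossentropy()

  # Both print the same value (~0.364); only the label format differs
  print(scce(int_labels, probs).numpy())
  print(cce(onehot_labels, probs).numpy())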
GogoG
3 years, 1 month ago
Definitely C - the label format in the question calls for a categorical cross-entropy loss function, i.e. three columns 'drivers_license', 'passport', 'credit_card' that each take the value 0 or 1. Meanwhile, sparse categorical cross-entropy would require the labels to be integer-encoded in a single vector, for example 'drivers_license' = 1, 'passport' = 2, 'credit_card' = 3.
upvoted 8 times
dorinas
2 days, 1 hour ago
Using 1, 2 and 3, the model might interpret one value as more important than another, which is not the case here.
upvoted 1 times
...
Jarek7
1 year, 4 months ago
Actually it is exactly the opposite. Your label map has 3 options which are mutually exclusive - a document cannot be both a driver's license and a passport. The output is a SPARSE vector: only one of the categorical outputs is valid for any one example.
upvoted 1 times
Jarek7
1 year, 4 months ago
No, I'm sorry, I wrote that before checking - you were right. We use sparse categorical cross-entropy when we have just an index (integer) as a label. The only difference is that it decodes the integer into a one-hot representation that suits our DNN output.
upvoted 1 times
...
...
...
...
...
gcp2021go
Highly Voted 3 years, 5 months ago
Answer is D: https://machinelearningmastery.com/how-to-choose-loss-functions-when-training-deep-learning-neural-networks/
upvoted 10 times
ori5225
3 years, 3 months ago
Use sparse categorical crossentropy when your classes are mutually exclusive (e.g. when each sample belongs exactly to one class) and categorical crossentropy when one sample can have multiple classes or labels are soft probabilities (like [0.5, 0.3, 0.2]).
upvoted 3 times
...
giaZ
2 years, 7 months ago
Literally from the link you posted: "A possible cause of frustration when using cross-entropy with classification problems with a large number of labels is the one hot encoding process. [...] This can mean that the target element of each training example may require a one hot encoded vector with tens or hundreds of thousands of zero values, requiring significant memory. Sparse cross-entropy addresses this by performing the same cross-entropy calculation of error, without requiring that the target variable be one hot encoded prior to training". Here we have 3 categories...No problem doing one-hot encoding. Answer: C
upvoted 2 times
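For scale, a rough back-of-the-envelope sketch of the memory argument quoted above (the example sizes are hypothetical); with only three classes, as in this question, the one-hot overhead is negligible:

  import numpy as np

  # Hypothetical scale: 1M examples, 50k classes
  num_examples, num_classes = 1_000_000, 50_000

  # Integer labels: one int32 per example
  int_label_mb = num_examples * np.dtype(np.int32).itemsize / 1e6               # ~4 MB
  # Dense one-hot targets: num_classes float32 values per example
  onehot_gb = num_examples * num_classes * np.dtype(np.float32).itemsize / 1e9  # ~200 GB

  print(f"integer labels: {int_label_mb:.0f} MB, one-hot labels: {onehot_gb:.0f} GB")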
...
...
joqu
Most Recent 2 days, 23 hours ago
Selected Answer: D
The question clearly says "You now have to train a model with the following LABEL MAP". Label map is not one-hot encoding.
upvoted 1 times
...
jkkim_jt
1 month ago
Selected Answer: D
• Categorical cross-entropy: multi-class classification with one-hot encoded labels
• Sparse categorical cross-entropy: multi-class classification with integer index labels
upvoted 1 times
...
Prakzz
4 months, 3 weeks ago
Selected Answer: D
C needs the target to be one-hot encoded already. Since it is not, the answer is D.
upvoted 1 times
...
PhilipKoku
5 months, 2 weeks ago
Selected Answer: C
C) Multi-class classification (three or more classes): since you have three classes, you should use a multi-class loss function. The most common choice for multi-class image classification is categorical cross-entropy, which is designed for scenarios where each input belongs to exactly one class (i.e., mutually exclusive classes). Therefore, the correct answer is C. Categorical cross-entropy - it's well-suited for multi-class classification tasks like this one.
References:
- How to Choose Loss Functions When Training Deep Learning Neural Networks (https://machinelearningmastery.com/how-to-choose-loss-functions-when-training-deep-learning-neural-networks/)
- Stack Exchange: How to know which loss function is suitable for image classification? (https://datascience.stackexchange.com/questions/58138/how-to-know-which-loss-function-is-suitable-for-image-classification)
upvoted 1 times
...
gscharly
7 months ago
Selected Answer: C
I'd go with C. Categorical cross-entropy is used when classes are mutually exclusive. If the number of classes were very high, then we could use sparse categorical cross-entropy.
upvoted 1 times
...
pinimichele01
7 months, 1 week ago
Selected Answer: D
Use sparse categorical crossentropy when your classes are mutually exclusive (e.g. when each sample belongs exactly to one class) and categorical crossentropy when one sample can have multiple classes or labels are soft probabilities (like [0.5, 0.3, 0.2]).
upvoted 2 times
pinimichele01
7 months ago
A. Categorical hinge: mainly for SVM-style soft margins.
B. Binary cross-entropy: for 2 classes only.
C. Categorical cross-entropy: multi-class, but not necessarily mutually exclusive.
D. Sparse categorical cross-entropy: multi-class and mutually exclusive only; saves memory too.
upvoted 2 times
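To illustrate the C vs. D trade-off in practice, a minimal Keras sketch (the architecture and input shape are assumed for illustration); the loss simply has to match the label encoding:

  import tensorflow as tf

  # Minimal three-class classifier (assumed 128x128 RGB inputs)
  model = tf.keras.Sequential([
      tf.keras.Input(shape=(128, 128, 3)),
      tf.keras.layers.Conv2D(16, 3, activation="relu"),
      tf.keras.layers.GlobalAveragePooling2D(),
      tf.keras.layers.Dense(3, activation="softmax"),
  ])

  # Integer labels (0, 1, 2) -> sparse categorical cross-entropy
  model.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])

  # One-hot labels ([1, 0, 0], ...) would instead use:
  # model.compile(optimizer="adam", loss="categorical_crossentropy",
  #               metrics=["accuracy"])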
...
pinimichele01
7 months ago
https://www.tensorflow.org/api_docs/python/tf/keras/losses/categorical_crossentropy https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy
upvoted 1 times
...
...
Yan_X
7 months, 3 weeks ago
Selected Answer: C
C. D is for integer labels instead of one-hot encoded vectors; in our question the labels 'drivers_license', 'passport', 'credit_card' are one-hot.
upvoted 1 times
...
Paulus89
8 months, 3 weeks ago
Selected Answer: C
It depends on how the labels are encoded. If one-hot, use CCE; if it's a single integer representing the class, use SCCE (source: same as in the official (wrong) answer). From the question it's not clear how the labels are encoded, but for just 3 classes there is no doubt it's better to go with one-hot encoding. Memory restrictions or a huge number of classes might point to SCCE.
upvoted 1 times
...
Zwi3b3l
10 months ago
Selected Answer: D
You now HAVE TO train a model with the following label map: ['drivers_license', 'passport', 'credit_card'].
upvoted 2 times
...
Sum_Sum
1 year ago
Selected Answer: C
If you are wondering between C & D, think about what "sparse" means: it is used when dealing with hundreds of categories.
upvoted 1 times
...
Sahana_98
1 year ago
Selected Answer: D
mutually exclusive classes
upvoted 1 times
...
syedsajjad
1 year, 1 month ago
In this case, we have a multi-class classification problem with three classes: driver's license, passport, and credit card. Therefore, we should use the categorical cross-entropy loss function to train our model. Sparse categorical cross-entropy is used for multi-class classification problems where the labels are represented in a sparse matrix format. This is not the case in this problem.
upvoted 2 times
...
lalala_meow
1 year, 1 month ago
Selected Answer: C
Only 3 categories, each value being either T or F. They don't really need to be integer-encoded, which is what distinguishes sparse cross-entropy from categorical.
upvoted 1 times
...
Dan137
1 year, 2 months ago
Selected Answer: D
https://fmorenovr.medium.com/sparse-categorical-cross-entropy-vs-categorical-cross-entropy-ea01d0392d28
upvoted 1 times
Dan137
1 year, 2 months ago
categorical_crossentropy (cce) produces a one-hot array containing the probable match for each category, sparse_categorical_crossentropy (scce) produces a category index of the most likely matching category.
upvoted 1 times
...
...
Venish
1 year, 3 months ago
The correct answer is C. Categorical cross-entropy. You are dealing with a multi-class classification problem where each image can belong to one of three classes: "driver's license," "passport," or "credit card." Categorical cross-entropy is the appropriate loss function for multi-class classification tasks. It measures the dissimilarity between the predicted class probabilities and the true class labels, penalizes larger errors in predicted probabilities, and helps the model converge towards more accurate predictions.
upvoted 1 times
...