Exam Professional Machine Learning Engineer topic 1 question 79 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 79
Topic #: 1

You need to train a natural language model to perform text classification on product descriptions that contain millions of examples and 100,000 unique words. You want to preprocess the words individually so that they can be fed into a recurrent neural network. What should you do?

  • A. Create a one-hot encoding of words, and feed the encodings into your model.
  • B. Identify word embeddings from a pre-trained model, and use the embeddings in your model.
  • C. Sort the words by frequency of occurrence, and use the frequencies as the encodings in your model.
  • D. Assign a numerical value to each word from 1 to 100,000 and feed the values as inputs in your model.
Suggested Answer: B 🗳️
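The suggested answer (B) amounts to mapping each word to a row of a dense embedding matrix instead of a sparse 100,000-wide one-hot vector. A minimal numpy sketch of that lookup, using a toy five-word vocabulary and random vectors in place of a real pre-trained matrix (GloVe or word2vec would be loaded here in practice; the names `word_to_id` and `encode` are illustrative, not from any library):

```python
import numpy as np

# Hypothetical tiny vocabulary standing in for the 100,000 unique words.
vocab = ["cheap", "durable", "red", "wireless", "battery"]
word_to_id = {w: i for i, w in enumerate(vocab)}

vocab_size = len(vocab)   # would be 100_000 in the question's scenario
embedding_dim = 3         # typically 50-300 with real pre-trained embeddings

# Random values stand in for a pre-trained embedding matrix.
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, embedding_dim))

def encode(description):
    """Map each word to its dense embedding vector (option B)."""
    ids = [word_to_id[w] for w in description.split()]
    return embedding_matrix[ids]   # shape: (num_words, embedding_dim)

seq = encode("wireless battery cheap")
print(seq.shape)  # (3, 3): one dense, low-dimensional vector per word
```

The resulting `(sequence_length, embedding_dim)` array is exactly the per-timestep input shape a recurrent network expects, which is why embeddings (rather than one-hot vectors, frequency counts, or raw integer IDs) are the standard preprocessing step here.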

Comments

M25
1 year, 6 months ago
Selected Answer: B
Went with B
upvoted 2 times
...
John_Pongthorn
1 year, 10 months ago
Selected Answer: B
B
- https://developers.google.com/machine-learning/guides/text-classification/step-3
- https://developers.google.com/machine-learning/guides/text-classification/step-4
upvoted 2 times
...
ares81
1 year, 10 months ago
Selected Answer: B
Answer is B
upvoted 1 times
...
egdiaa
1 year, 11 months ago
Answer is B: According to Google Docs here: - https://developers.google.com/machine-learning/guides/text-classification/ it is a Word Embedding case
upvoted 4 times
...
hiromi
1 year, 11 months ago
Selected Answer: B
B (I'm not sure)
- https://developers.google.com/machine-learning/guides/text-classification/step-3#label_vectorization
- https://developers.google.com/machine-learning/guides/text-classification/step-4
- https://towardsai.net/p/deep-learning/text-classification-with-rnn
- https://towardsdatascience.com/pre-trained-word-embedding-for-text-classification-end2end-approach-5fbf5cd8aead
upvoted 2 times
hiromi
1 year, 11 months ago
- https://developers.google.com/machine-learning/crash-course/embeddings/translating-to-a-lower-dimensional-space
upvoted 1 times
...
...
LearnSodas
1 year, 11 months ago
Selected Answer: C
Bag of words is a good practice for representing and feeding text to a DNN: https://machinelearningmastery.com/gentle-introduction-bag-words-model/
upvoted 1 times
503b759
1 week, 3 days ago
Probably BOW suffers from the high cardinality of the text (100k words); embeddings are typically lower-dimensional (hundreds, not thousands, of columns).
upvoted 1 times
...
...
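The cardinality point raised in the thread above can be made concrete with a rough back-of-envelope comparison. The sequence length and embedding width below are assumed values for illustration, not figures from the question:

```python
vocab_size = 100_000   # unique words, per the question
seq_len = 50           # assumed average description length in words
embedding_dim = 128    # assumed embedding width (commonly 50-300)

# Values needed to represent one description as model input:
one_hot_floats = seq_len * vocab_size       # one-hot: 5,000,000 values
embedded_floats = seq_len * embedding_dim   # embeddings: 6,400 values

print(one_hot_floats // embedded_floats)    # 781: ~780x smaller input
```

Under these assumptions, the embedded representation is roughly three orders of magnitude smaller per example, which is why options A and C scale poorly at this vocabulary size.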
Community vote distribution
A (35%)
C (25%)
B (20%)
Other