Welcome to ExamTopics

Exam Certified Generative AI Engineer Associate topic 1 question 41 discussion

Actual exam question from Databricks's Certified Generative AI Engineer Associate
Question #: 41
Topic #: 1

A company has a typical RAG-enabled, customer-facing chatbot on its website.

Select the correct sequence of components a user's question will go through before the final output is returned. Use the diagram above for reference.

  • A. 1.embedding model, 2.vector search, 3.context-augmented prompt, 4.response-generating LLM
  • B. 1.context-augmented prompt, 2.vector search, 3.embedding model, 4.response-generating LLM
  • C. 1.response-generating LLM, 2.vector search, 3.context-augmented prompt, 4.embedding model
  • D. 1.response-generating LLM, 2.context-augmented prompt, 3.vector search, 4.embedding model
Suggested Answer: A

Comments

4af18fc
Highly Voted 1 month, 2 weeks ago
Selected Answer: A
We first need to vectorize the user's question; those vectors are then used to search the database for relevant documents, which are combined into an augmented prompt that the response-generating LLM uses to produce the answer.
upvoted 5 times
trendy01
Most Recent 4 weeks, 1 day ago
Selected Answer: A
A. 1. Embedding model → 2. Vector search → 3. Context augmentation prompt → 4. Response generation LLM
upvoted 1 times
HemaKG
1 month ago
In a typical RAG-enabled chatbot, the process usually follows these steps:
  • Embedding model: converts the user's question into a vector representation.
  • Vector search: finds relevant documents based on that vector representation.
  • Context-augmented prompt: combines the retrieved information with the original question into a single prompt.
  • Response-generating LLM: generates the final response from the context-augmented prompt.
So the correct sequence is A: 1. embedding model, 2. vector search, 3. context-augmented prompt, 4. response-generating LLM. Option C suggests starting with the response-generating LLM, which doesn't align with the typical RAG workflow: the LLM needs the context-augmented prompt to generate a relevant response, which is why it comes last in the sequence.
upvoted 1 times
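The four-step flow in answer A can be sketched end to end. Everything below is a toy stand-in, assumed for illustration only: a bag-of-characters "embedding", an in-memory cosine-similarity search, and a stubbed LLM call — not a real embedding model, vector database, or Databricks API.

```python
def embed(text):
    """Step 1 stand-in: toy bag-of-characters 'embedding' (not a real model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def vector_search(query_vec, index, top_k=1):
    """Step 2 stand-in: cosine similarity over an in-memory doc list."""
    scored = [(sum(q * d for q, d in zip(query_vec, doc_vec)), doc)
              for doc, doc_vec in index]
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:top_k]]

def build_prompt(question, context_docs):
    """Step 3: assemble the context-augmented prompt."""
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def generate(prompt):
    """Step 4 stand-in: a stubbed response-generating LLM call."""
    return f"[LLM answer conditioned on a {len(prompt)}-char prompt]"

# Tiny in-memory "knowledge base" indexed up front.
docs = ["Returns are accepted within 30 days.",
        "Shipping takes 3-5 business days."]
index = [(d, embed(d)) for d in docs]

question = "How long does shipping take?"
q_vec = embed(question)                  # 1. embedding model
hits = vector_search(q_vec, index)       # 2. vector search
prompt = build_prompt(question, hits)    # 3. context-augmented prompt
answer = generate(prompt)                # 4. response-generating LLM
print(answer)
```

Note how the question is embedded before anything else touches it, and the LLM runs last — which is exactly why options C and D, which put the response-generating LLM first, cannot be right.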
Community vote distribution
A (35%)
C (25%)
B (20%)
Other
