Exam DP-203 topic 2 question 36 discussion

Actual exam question from Microsoft's DP-203
Question #: 36
Topic #: 2

You have an Azure Databricks workspace named workspace1 in the Standard pricing tier.
You need to configure workspace1 to support autoscaling all-purpose clusters. The solution must meet the following requirements:
✑ Automatically scale down workers when the cluster is underutilized for three minutes.
✑ Minimize the time it takes to scale to the maximum number of workers.
✑ Minimize costs.
What should you do first?

  • A. Enable container services for workspace1.
  • B. Upgrade workspace1 to the Premium pricing tier.
  • C. Set Cluster Mode to High Concurrency.
  • D. Create a cluster policy in workspace1.
Suggested Answer: B

Comments

ROBERSONWM
Highly Voted 3 years, 1 month ago
B is the correct answer. Automated (job) clusters always use optimized autoscaling. The type of autoscaling performed on all-purpose clusters depends on the workspace configuration. Standard autoscaling is used by all-purpose clusters in workspaces in the Standard pricing tier. Optimized autoscaling is used by all-purpose clusters in the Azure Databricks Premium Plan. https://docs.databricks.com/clusters/cluster-config-best-practices.html
upvoted 19 times
...
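For reference, the autoscaling settings being discussed map onto a Databricks Clusters API "create" payload roughly as follows. This is a minimal sketch: the field names (`autoscale`, `min_workers`, `max_workers`, `autotermination_minutes`) come from the public Clusters API, but the cluster name, runtime version, VM size, and worker counts are illustrative assumptions.

```python
# Sketch of an autoscaling all-purpose cluster spec (Databricks Clusters API
# "create" payload). Field names follow the public API; the concrete values
# (runtime, node type, worker counts) are illustrative assumptions.
cluster_spec = {
    "cluster_name": "autoscaling-all-purpose",   # assumed name
    "spark_version": "13.3.x-scala2.12",         # assumed LTS runtime
    "node_type_id": "Standard_DS3_v2",           # assumed Azure VM size
    "autoscale": {
        "min_workers": 2,   # low floor keeps idle cost down
        "max_workers": 8,   # ceiling the cluster scales up to under load
    },
    "autotermination_minutes": 30,  # terminate the whole cluster when idle
}

# The tier decides HOW this autoscale block behaves: optimized autoscaling
# (Premium) scales all-purpose clusters down after ~150 s of underutilization,
# while standard autoscaling (Standard tier) waits at least 10 minutes.
print(cluster_spec["autoscale"])
```

Note that the payload itself is identical in both tiers; only the workspace's pricing tier determines whether standard or optimized autoscaling is applied, which is why the question hinges on upgrading first.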
lukeonline
Highly Voted 2 years, 9 months ago
Selected Answer: B
We definitely need "Optimized Autoscaling" (not Standard Autoscaling), which is only part of the Premium Plan. Reason: we need to scale down after 3 minutes of underutilization, and Standard Autoscaling only allows scaling down after at least 10 minutes. Standard autoscaling: "Scales down only when the cluster is completely idle and it has been underutilized for the last 10 minutes." https://docs.microsoft.com/en-us/azure/databricks/clusters/configure
upvoted 7 times
...
Pey1nkh
Most Recent 2 months ago
Selected Answer: B
Naughty Microsoft :)! The right action is a cluster policy, but it requires the Premium tier. If you're on the Standard tier, you cannot implement this solution until you upgrade to Premium. So the answer is B. Upgrade workspace1 to the Premium pricing tier.
upvoted 1 times
...
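Since several comments mention cluster policies as the follow-up step after upgrading, here is a sketch of what such a policy definition might look like. The attribute paths and policy types (`range`, `fixed`) follow the Databricks cluster-policy definition format, which is a Premium-tier feature; the specific limits and default values are illustrative assumptions.

```python
# Sketch of a cluster policy definition (Premium tier only) that constrains
# autoscaling all-purpose clusters. Attribute paths and policy types follow
# the Databricks cluster-policy format; the limit values are assumptions.
policy_definition = {
    "autoscale.min_workers": {"type": "range", "maxValue": 2, "defaultValue": 1},
    "autoscale.max_workers": {"type": "range", "maxValue": 8, "defaultValue": 4},
    # Force auto-termination so idle clusters cannot accumulate cost.
    "autotermination_minutes": {"type": "fixed", "value": 30, "hidden": True},
}
```

A policy like this caps cost by bounding worker counts and pinning auto-termination, but as the comments note, creating it at all requires the Premium tier first.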
moneytime
8 months, 2 weeks ago
B and C are both required to do the job, but the FIRST thing to do before creating the autoscaling cluster policy is to upgrade to the Premium tier, where this autoscaling is supported. This explains why B is the valid answer. In conclusion, C is part of the process, but not the first thing to do.
upvoted 2 times
...
kkk5566
1 year, 1 month ago
Selected Answer: B
B is correct
upvoted 1 times
...
akhil5432
1 year, 2 months ago
Selected Answer: B
Option B
upvoted 1 times
...
auwia
1 year, 4 months ago
Selected Answer: D
I've finally found a valid answer: https://learn.microsoft.com/en-us/azure/databricks/administration-guide/clusters/policies
upvoted 2 times
auwia
1 year, 3 months ago
False, Cluster policies require the Premium plan. :) So B is the correct answer.
upvoted 3 times
...
...
Rossana
1 year, 6 months ago
B doesn't minimize the costs. To support autoscaling all-purpose clusters in Azure Databricks, you need to create a cluster policy that specifies the autoscaling settings. The cluster policy allows you to specify when to add or remove workers based on the workload on the cluster.

For this scenario, the cluster policy should be configured to automatically scale down workers when the cluster is underutilized for three minutes. This will help to minimize costs by reducing the number of idle workers. The policy should also be configured to scale to the maximum number of workers quickly, to minimize the time it takes to process workloads.

Enabling container services for workspace1 (option A) is not necessary for autoscaling all-purpose clusters. Upgrading workspace1 to the Premium pricing tier (option B) may not be necessary and may not be cost-effective depending on your specific requirements. Setting Cluster Mode to High Concurrency (option C) is not related to autoscaling all-purpose clusters.
upvoted 5 times
JG1984
1 year, 4 months ago
Cluster policies are available only in the Premium pricing tier of Azure Databricks, not in the Standard pricing tier.
upvoted 3 times
...
...
Deeksha1234
2 years, 2 months ago
B is correct
upvoted 1 times
...
Aurelkb
2 years, 5 months ago
Selected Answer: B
correct
upvoted 2 times
...
Egocentric
2 years, 6 months ago
B is the correct answer
upvoted 1 times
...
Jaws1990
2 years, 9 months ago
Not sure if this is a valid question anymore. This link shows that the standard pricing tier supports optimised autoscaling. https://databricks.com/product/azure-pricing
upvoted 6 times
allagowf
2 years ago
The autoscaling is under the Premium plan, not the Standard one, and this is clear in the link you shared.
upvoted 1 times
Igor85
1 year, 11 months ago
no difference anymore between Standard and Premium, indeed
upvoted 1 times
lcss27
6 months, 2 weeks ago
https://www.databricks.com/product/pricing/platform-addons Cluster policies are only available on Premium.
upvoted 1 times
...
...
cosarac
1 year, 11 months ago
As Jaws1990 says, it is available on both according to the link. It shows green for both types.
upvoted 1 times
...
...
...
trietnv
2 years, 10 months ago
Selected Answer: B
They need to use optimized autoscaling to meet the requirements. - Optimized autoscaling is used by all-purpose clusters in the Azure Databricks Premium Plan. - On job clusters, it scales down if the cluster is underutilized over the last 40 seconds. - On all-purpose clusters, it scales down if the cluster is underutilized over the last 150 seconds. Reference: https://docs.microsoft.com/en-us/azure/databricks/clusters/configure
upvoted 3 times
...
Canary_2021
2 years, 10 months ago
1. Both Standard and Premium pricing tiers support Autopilot clusters. Autopilot supports autoscaling and "Terminate after X minutes of inactivity". 2. Cluster policies are only supported by the Premium pricing tier; they control cost by limiting per-cluster maximum cost. 3. The Standard pricing tier is cheaper than the Premium pricing tier. Based on these 3 items, I can't figure out why it has to be upgraded to the Premium pricing tier.
upvoted 1 times
Canary_2021
2 years, 10 months ago
A. Enable Databricks Container Services only when you need to use custom containers, so it is not a correct answer. I vote C to be the correct answer.
upvoted 1 times
...
...
Larrave
2 years, 10 months ago
Answer B is correct. One has to check the documentation. There are two autoscaling solutions: standard autoscaling (Standard tier) and optimized autoscaling (Premium tier). Since there is a requirement of downscaling after three minutes of underutilization, only optimized autoscaling can offer such a solution. https://docs.microsoft.com/en-us/azure/databricks/clusters/configure#optimized-autoscaling On all-purpose clusters, it scales down if the cluster is underutilized over the last 150 seconds.
upvoted 6 times
...
certstowinirl
2 years, 11 months ago
Why is the answer not D? Autoscaling is available in the Standard pricing tier. Since "costs" is also a factor in this question, why upgrade to premium?
upvoted 5 times
...
brendy
3 years, 2 months ago
Is this correct?
upvoted 2 times
Sudheer_K
3 years, 1 month ago
Not sure; what about the cost factor? Premium doesn't minimise cost.
upvoted 1 times
...
...
Community vote distribution
A (35%)
C (25%)
B (20%)
Other