Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.summary()
Does this meet the goal?
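For context, a minimal sketch of how this solution would run in a Fabric notebook, assuming the data has already been loaded as a Delta table named "Sales" in the attached lakehouse (the table name is hypothetical):

# Read the data into a Spark DataFrame from the lakehouse attached to the notebook.
df = spark.read.table("Sales")

# summary() computes count, mean, stddev, min, the 25%/50%/75% percentiles, and max
# for string and numeric columns, which includes the required statistics.
df.summary().show()

# Optionally, only the requested statistics can be selected explicitly:
df.summary("min", "max", "mean", "stddev").show()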