
Exam DP-203 topic 1 question 2 discussion

Actual exam question from Microsoft's DP-203
Question #: 2
Topic #: 1

You have an Azure Synapse workspace named MyWorkspace that contains an Apache Spark database named mytestdb.
You run the following command in an Azure Synapse Analytics Spark pool in MyWorkspace.
CREATE TABLE mytestdb.myParquetTable(
    EmployeeID int,
    EmployeeName string,
    EmployeeStartDate date)
USING Parquet
You then use Spark to insert a row into mytestdb.myParquetTable. The row contains the following data.
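(A minimal Spark SQL sketch of such an insert, using the row values implied by the query and the suggested answer, i.e. EmployeeID 24 and EmployeeName 'Alice'; the start date is assumed for illustration only:)

-- Hypothetical Spark SQL insert; the EmployeeStartDate value is an assumption
INSERT INTO mytestdb.myParquetTable VALUES (24, 'Alice', DATE'2020-01-01');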

One minute later, you execute the following query from a serverless SQL pool in MyWorkspace.

SELECT EmployeeID
FROM mytestdb.dbo.myParquetTable
WHERE EmployeeName = 'Alice';
What will be returned by the query?

  • A. 24
  • B. an error
  • C. a null value
Suggested Answer: A 🗳️

Comments

gerrie1979
Highly Voted 2 years, 4 months ago
I did a test: I waited one minute and ran the query in a serverless SQL pool, and I received 24 as the result. I don't understand why B has been voted for so much, because the answer is A) 24, without a doubt.
upvoted 63 times
cecbc1f
1 year ago
I tested it too and confirm that the right answer is A.
upvoted 4 times
maximilianogarcia6
2 years, 4 months ago
Did you try the same query that is presented here, with "mytestdb.dbo.myParquetTable"?
upvoted 4 times
yogiazaad
2 years, 1 month ago
The table and column names are case-insensitive.
upvoted 3 times
Virul
2 years, 1 month ago
I tried with all upper case, and it still returned the record for the name Alice. The answer is A.
upvoted 6 times
dmitriypo
Highly Voted 2 years, 4 months ago
The answer is B, but not because of the lowercase; the case has nothing to do with the error. If you look attentively, you will notice that we create the table mytestdb.myParquetTable, but the SELECT statement references the table mytestdb.dbo.myParquetTable (note the dbo). Here is the error message I got: Error: spark_catalog requires a single-part namespace, but got [mytestdb, dbo].
upvoted 56 times
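For reference, a minimal sketch of the two naming forms discussed in this thread, using the table from the question: a Spark pool addresses the table with a two-part name, while the serverless SQL pool query in the question uses a three-part name.

-- Spark SQL (Spark pool): two-part name <database>.<table>
SELECT EmployeeID FROM mytestdb.myParquetTable WHERE EmployeeName = 'Alice';

-- T-SQL (serverless SQL pool): three-part name <database>.<schema>.<table>
SELECT EmployeeID FROM mytestdb.dbo.myParquetTable WHERE EmployeeName = 'Alice';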
psicktrick
2 years, 2 months ago
But if you look at the docs, that's exactly what has been done https://learn.microsoft.com/en-us/azure/synapse-analytics/metadata/table#expose-a-spark-table-in-sql:~:text=mytestdb.myparquettable%22)%3B-,Now%20you%20can%20read%20the%20data%20from%20your%20serverless%20SQL%20pool%20as%20follows%3A,-SQL
upvoted 20 times
Bedmed
6 months, 2 weeks ago
No. The CREATE TABLE is run in an Azure Synapse Analytics Spark pool, and the SELECT is run in the serverless SQL pool.
upvoted 4 times
ck8.kakade
7 months, 3 weeks ago
Yes, so the right answer is A. It will return an output without any errors.
upvoted 2 times
goldy29
1 year, 8 months ago
Thanks @psicktrick for the link
upvoted 4 times
devnginx
1 year, 3 months ago
I think option B is correct too.
upvoted 1 times
Shaik_Shahul
1 year, 5 months ago
I think you don't know about SQL Server, bro. dbo means database object, so it is not an issue here; the correct answer is A.
upvoted 6 times
__Tom
11 months, 2 weeks ago
dbo means database owner, actually, bro.
upvoted 3 times
SenMia
1 year, 3 months ago
Kindly clarify: which is the right option? The conversation is confusing. :( Any explanations are appreciated. Thank you!
upvoted 2 times
AMJB
Most Recent 4 days, 8 hours ago
Selected Answer: A
The clue is in the name of the Apache Spark database. The database is mytestdb. Basically, you can create a table with <db>.<tablename>; it will default to the dbo schema.
upvoted 1 times
Lethahavm
5 days, 4 hours ago
Selected Answer: B
Since there's no configuration in the given scenario to link the Spark-managed Parquet table to the serverless SQL pool, an error will occur when attempting to execute the SQL query.
upvoted 1 times
Ciske92
1 month, 1 week ago
Selected Answer: A
I think A is the right answer. Firstly, dbo is added automatically (see the examples in the documentation). Furthermore, even though the table created in the Spark pool is saved with its name in lower case (as stated in the documentation), the serverless SQL pool in Synapse is case-insensitive by default when it comes to table and column names.
upvoted 1 times
Rayenwalid
1 month, 3 weeks ago
Selected Answer: A
I believe A is correct because the selection query occurs after the insertion operation.
upvoted 1 times
Romanx
2 months ago
Selected Answer: B
dbo should not be there.
upvoted 1 times
Asheesh1909
2 months, 2 weeks ago
Selected Answer: A
Option A is correct; I tested it. 1. There is no issue with the case: since the table and database names are saved in lowercase, whatever case we use in the query, the Azure SQL engine converts it to lowercase before running the query. 2. Regarding .dbo: the table is created in the Spark pool and its metadata is stored in the Hive metastore, so without using .dbo (the schema layer) the serverless SQL pool cannot access the table and gives an error. Since we are using mytestdb.dbo.myparquetTable, the serverless pool searches for the table in both its own schema layer and the Hive schema layer, giving the output 24. Also, once a table is created in a Spark pool, it can be accessed from a serverless SQL pool without any issues; there is no need to create an external table.
upvoted 1 times
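A small sketch of the case-insensitivity point made above, assuming the serverless SQL pool's default collation; both statements should return the same row:

-- Table and column identifiers resolve case-insensitively in the serverless SQL pool
SELECT EmployeeID FROM mytestdb.dbo.myParquetTable WHERE EmployeeName = 'Alice';
SELECT EMPLOYEEID FROM MYTESTDB.dbo.MYPARQUETTABLE WHERE EMPLOYEENAME = 'Alice';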
abhi_11kr1
3 months, 2 weeks ago
Selected Answer: B
The query will fail because the serverless SQL pool cannot directly access Spark-managed tables without additional configuration. Use an external table or expose the data via a view to make it accessible.
upvoted 1 times
hypersam
2 months, 1 week ago
you should at least test it out before posting wrong answers
upvoted 2 times
19a3424
3 months, 3 weeks ago
Selected Answer: A
The example is shown here: https://learn.microsoft.com/en-us/azure/synapse-analytics/metadata/table#create-an-external-table-in-spark-and-query-from-serverless-sql-pool
upvoted 1 times
EmnCours
3 months, 3 weeks ago
Selected Answer: A
upvoted 1 times
Anithec0der
3 months, 3 weeks ago
Option A: dbo is the default schema where the object gets created in Synapse if a schema is not specified explicitly.
upvoted 1 times
BrilliantBeast
4 months ago
Doesn't the order of insertion matter here? The column order used in the insertion is different from the column order in the table.
upvoted 1 times
examdemo
5 months, 2 weeks ago
24 is the correct answer. Link: https://learn.microsoft.com/en-us/azure/synapse-analytics/metadata/table#create-a-managed-table-in-spark-and-query-from-serverless-sql-pool
upvoted 1 times
esaade
5 months, 3 weeks ago
As per ChatGPT: the query will fail with an error because the table myParquetTable was created using Spark with the USING Parquet option, which means it is stored in Parquet file format. The serverless SQL pool does not support querying Parquet files directly, so it cannot query myParquetTable in its current form. To make the table accessible from the serverless SQL pool, you need to create an external table that references the Parquet file; then you can query the external table instead. Assuming you have created an external table named myExternalParquetTable that references the Parquet file containing the data in myParquetTable, the query to select EmployeeID where EmployeeName is 'Alice' would be: SELECT EmployeeID FROM myExternalParquetTable WHERE EmployeeName = 'Alice';
upvoted 6 times
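A rough T-SQL sketch of the external-table approach described above; the data source, file format, and LOCATION values are hypothetical placeholders and would need to point at the storage path of the Spark table's underlying Parquet files:

-- Hypothetical serverless SQL pool objects; names and paths are placeholders
CREATE EXTERNAL DATA SOURCE myDataSource
WITH (LOCATION = 'https://<storageaccount>.dfs.core.windows.net/<container>');

CREATE EXTERNAL FILE FORMAT myParquetFormat
WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL TABLE myExternalParquetTable (
    EmployeeID int,
    EmployeeName varchar(100),
    EmployeeStartDate date
)
WITH (
    LOCATION = '/myParquetTable/',
    DATA_SOURCE = myDataSource,
    FILE_FORMAT = myParquetFormat
);

-- Query the external table instead of the Spark-managed table
SELECT EmployeeID FROM myExternalParquetTable WHERE EmployeeName = 'Alice';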
milad2021
5 months, 3 weeks ago
Selected Answer: B
The query will fail with an error because mytestdb.myParquetTable is a Spark table, not a SQL table. When you created the table using Spark, you used the Spark SQL syntax, and the table is stored in the Spark engine's metadata. Serverless SQL pool in Azure Synapse Analytics cannot directly query Spark tables; it can only query SQL tables. If you want to query the data stored in the mytestdb.myParquetTable table using a serverless SQL pool, you need to create an external table that maps to the same Parquet file. You can do this by using the CREATE EXTERNAL TABLE statement in a SQL pool.
upvoted 3 times
AlejandroU
1 year, 5 months ago
or as "Managed table". The answer seems to be very similar to the section "Create a managed table in Spark and query from serverless SQL pool" in the link below: https://learn.microsoft.com/en-us/azure/synapse-analytics/metadata/table#expose-a-spark-table-in-sql
upvoted 2 times
Katiane
5 months, 3 weeks ago
The right answer is B. The query must run in the serverless SQL pool, not in Apache Spark: "One minute later, you execute the following query from a serverless SQL pool in MyWorkspace." If we ran that query in an Apache Spark pool, using a notebook for example, we would have to use "SELECT database.table". So, according to the question, we must use the serverless SQL pool and, because of that, we have to use "SELECT database.dbo.table" or "USE database; SELECT table".
upvoted 1 times
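A short sketch of the second form mentioned in this comment, assuming USE is allowed in the serverless SQL pool session:

-- Switch the database context, then use the schema-qualified table name
USE mytestdb;
SELECT EmployeeID FROM dbo.myParquetTable WHERE EmployeeName = 'Alice';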
Community vote distribution: A (35%), C (25%), B (20%), Other