Exam 70-768 topic 1 question 31 discussion

Actual exam question from Microsoft's 70-768
Question #: 31
Topic #: 1

DRAG DROP -

Case Study #1 -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Background -
Wide World Importers imports and sells clothing. The company has a multidimensional Microsoft SQL Server Analysis Services instance. The server has 80 gigabytes (GB) of available physical memory. The following installed services are running on the server:
✑ SQL Server Database Engine
✑ SQL Server Analysis Services (multidimensional)
The database engine instance has been configured for a hard cap of 50 GB, and it cannot be lowered. The instance contains the following cubes: SalesAnalysis, OrderAnalysis.
Reports that are generated based on data from the OrderAnalysis cube take more time to complete when they are generated in the afternoon each day. You examine the server and observe that it is under significant memory pressure.
Processing for all cubes must occur automatically in increments. You create one job to process the cubes and another job to process the dimensions. You must configure a processing task for each job that optimizes performance. As the cubes grow in size, the overnight processing of the cubes often does not complete during the allowed maintenance time window.
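For illustration only, here is a minimal Python sketch of the two-job split described above, using pythonnet to drive the Analysis Management Objects (AMO) client library. The server name, database name, and the specific ProcessType values are assumptions made for the sketch; they are not the answer to the exam question.

```python
# Sketch of the two processing jobs: one for dimensions, one for cubes.
# Requires the SSAS client libraries (AMO) and the pythonnet package.
import clr

clr.AddReference("Microsoft.AnalysisServices")          # AMO assembly
from Microsoft.AnalysisServices import Server, ProcessType

SERVER = "localhost"                  # assumed SSAS instance name
DATABASE = "WideWorldImportersMD"     # assumed multidimensional database name


def process_dimensions():
    """Job 1: process every dimension in the database."""
    server = Server()
    server.Connect(SERVER)
    try:
        db = server.Databases.GetByName(DATABASE)
        for dimension in db.Dimensions:
            # ProcessUpdate applies member changes without forcing a full
            # reprocess of the dependent cubes (illustrative choice only).
            dimension.Process(ProcessType.ProcessUpdate)
    finally:
        server.Disconnect()


def process_cubes():
    """Job 2: bring every cube to a fully processed state."""
    server = Server()
    server.Connect(SERVER)
    try:
        db = server.Databases.GetByName(DATABASE)
        for cube in db.Cubes:
            # ProcessDefault performs only the work needed to reach a
            # processed state (illustrative choice only).
            cube.Process(ProcessType.ProcessDefault)
    finally:
        server.Disconnect()


if __name__ == "__main__":
    process_dimensions()
    process_cubes()
```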

SalesAnalysis -
The SalesAnalysis cube is currently being tested before being used in production. Users report that day name attribute values are sorted alphabetically. Day name attribute values must be sorted chronologically. Users report that they are unable to query the cube while any cube processing operations are in progress. You need to maximize data availability during cube processing and ensure that you process both dimensions and measures.
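As a rough sketch of how the chronological sort could be configured through AMO (the exam does not show this code), the example below orders an assumed Day Name attribute by the key of an assumed Day Number Of Week attribute. All object names and the instance name are placeholders.

```python
# Sketch: sort the Day Name attribute chronologically by ordering it on a
# numeric key attribute instead of alphabetically by its own name.
import clr

clr.AddReference("Microsoft.AnalysisServices")
from Microsoft.AnalysisServices import Server, OrderBy

server = Server()
server.Connect("localhost")                               # assumed SSAS instance
db = server.Databases.GetByName("WideWorldImportersMD")   # assumed database

date_dim = db.Dimensions.GetByName("Date")                # assumed dimension
day_name = date_dim.Attributes.FindByName("Day Name")     # assumed attribute

# Order the attribute by the key of a related numeric attribute.
day_name.OrderBy = OrderBy.AttributeKey
day_name.OrderByAttributeID = date_dim.Attributes.FindByName("Day Number Of Week").ID

date_dim.Update()          # save the design change; the dimension must then
server.Disconnect()        # be reprocessed for the new sort order to apply
```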

OrderAnalysis -
The OrderAnalysis cube is used for reporting and ad-hoc queries from Microsoft Excel. The data warehouse team adds a new table named Fact.Transaction to the cube. The Fact.Transaction table includes a column named Total Including Tax. You must add a new measure named Transactions Total Including Tax to the cube. The measure must be calculated as the sum of the Total Including Tax column across any selected relevant dimensions.
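A hedged sketch of how such a measure might be added through AMO is shown below; the measure group name, data source view table ID, and column ID are placeholders, and the real cube design may differ.

```python
# Sketch: add a "Transactions Total Including Tax" measure that sums the
# Total Including Tax column of the new Fact.Transaction table.
import clr

clr.AddReference("Microsoft.AnalysisServices")
from Microsoft.AnalysisServices import (
    Server, AggregationFunction, DataItem, UpdateOptions
)

server = Server()
server.Connect("localhost")                               # assumed SSAS instance
db = server.Databases.GetByName("WideWorldImportersMD")   # assumed database
cube = db.Cubes.GetByName("OrderAnalysis")

mg = cube.MeasureGroups.FindByName("Fact Transaction")    # assumed measure group
measure = mg.Measures.Add("Transactions Total Including Tax")
measure.AggregateFunction = AggregationFunction.Sum       # sum across any slice
measure.Source = DataItem("Fact_Transaction", "Total Including Tax")  # assumed IDs

cube.Update(UpdateOptions.ExpandFull)                     # deploy the change
server.Disconnect()
```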

Finance -
The Finance cube is used to analyze General Ledger entries for the company.

Requirements -
You must minimize the time that it takes to process cubes while meeting the following requirements:
✑ The Sales cube requires overnight processing of dimensions, cubes, measure groups, and partitions.
✑ The OrderAnalysis cube requires overnight processing of dimensions only.
✑ The Finance cube requires overnight processing of dimensions only.
You need to resolve the issues that the users report.
Which processing options should you use? To answer, drag the appropriate processing option to the correct location or locations. Each processing option may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Select and Place:

Suggested Answer:
Box 1: Process Full -
When Process Full is executed against an object that has already been processed, Analysis Services drops all data in the object, and then processes the object.
This kind of processing is required when a structural change has been made to an object, for example, when an attribute hierarchy is added, deleted, or renamed.

Box 2: Process Default -
Detects the process state of database objects, and performs processing necessary to deliver unprocessed or partially processed objects to a fully processed state.
If you change a data binding, Process Default will do a Process Full on the affected object.
Box 3:
Not Process Update: Forces a re-read of data and an update of dimension attributes. Flexible aggregations and indexes on related partitions will be dropped.
Incorrect Answers:
Not Process Clear: Drops the data in the object specified and any lower-level constituent objects. After the data is dropped, it is not reloaded.
Not Process Data: Processes data only without building aggregations or indexes. If there is data in the partitions, it will be dropped before re-populating the partition with source data.
Not Process Index: Creates or rebuilds indexes and aggregations for all processed partitions. For unprocessed objects, this option generates an error.
Processing with this option is needed if you turn off Lazy Processing.
References: https://docs.microsoft.com/en-us/sql/analysis-services/multidimensional-models/processing-options-and-settings-analysis-services
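To make the options above concrete, the sketch below submits an XMLA Process command of the kind a SQL Server Agent job step of type "SQL Server Analysis Services Command" could contain, sent here from Python via AMO's Execute method. The database and cube IDs are assumptions, and ProcessFull is used purely as an example of the options discussed.

```python
# Sketch: submit an XMLA Process command to the SSAS instance.
import clr

clr.AddReference("Microsoft.AnalysisServices")
from Microsoft.AnalysisServices import Server

XMLA = """
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>WideWorldImportersMD</DatabaseID>
      <CubeID>SalesAnalysis</CubeID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
"""

server = Server()
server.Connect("localhost")              # assumed SSAS instance
results = server.Execute(XMLA)           # returns an XmlaResultCollection
for result in results:
    for message in result.Messages:      # surface any warnings or errors
        print(message.Description)
server.Disconnect()
```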

Comments

bilel_kaaniche
Highly Voted 4 years, 8 months ago
We cannot use Process Update on a cube, so for least availability we should use Process Data.
upvoted 8 times
...
Anette
Highly Voted 4 years, 6 months ago
I think this is the correct answer:
Maximum data availability – Process Full
Less than maximum data availability – Process Default
Least data availability – Process Data
upvoted 8 times
...
clement_
Most Recent 4 years, 2 months ago
Below is my own analysis, which differs from all I see here.
Process Clear will remove all the data; this is not something we usually want.
Process Update is not applicable to cubes.
Process Index would not allow getting the most recent data.
Process Data will do a Process Full without the aggregation and index parts (the query performance optimization), hence the least downtime and the maximum data availability.
Process Default will apply a "partial" Process Full depending on what is needed, so the result depends on the situation but may be in between Process Data and Process Full.
Process Full is the maximum processing achievable, hence the most downtime and the least availability.
My answer would then be:
Max avail: Process Data
Less than max avail: Process Default
Least avail: Process Full
Do not hesitate to reply.
upvoted 6 times
...