Exam CISSP topic 1 question 10 discussion

Actual exam question from ISC's CISSP
Question #: 10
Topic #: 1

An organization has been collecting a large amount of redundant and unusable data and filling up the storage area network (SAN). Management has requested the identification of a solution that will address ongoing storage problems. Which is the BEST technical solution?

  • A. Compression
  • B. Caching
  • C. Replication
  • D. Deduplication
Suggested Answer: D

Comments

Tanzy360
Highly Voted 2 years, 5 months ago
Selected Answer: D
D is the only answer choice that makes sense given the excess data
upvoted 9 times
...
franbarpro
Highly Voted 2 years, 5 months ago
Selected Answer: D
"D" it is. Data deduplication is a process that eliminates excessive copies of data and significantly decreases storage capacity requirements. Deduplication can be run as an inline process as the data is being written into the storage system and/or as a background process to eliminate duplicates after the data is written to disk. https://www.netapp.com/data-management/what-is-data-deduplication/#:~:text=Data%20deduplication%20is%20a%20process,data%20is%20written%20to%20disk.
upvoted 7 times
...
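A toy sketch of the inline path described in the comment above: hash each incoming block and store the block only if its fingerprint is new. The block size, class name, and dict-backed store are all invented for illustration; real SAN arrays do this in firmware, not Python.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size; real arrays choose their own

class DedupStore:
    """Toy inline-dedup store: each unique block is kept once, keyed by fingerprint."""
    def __init__(self):
        self.blocks = {}  # fingerprint -> block bytes (stored once)
        self.files = {}   # filename -> ordered list of fingerprints

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)  # inline dedup: skip blocks already seen
            refs.append(fp)
        self.files[name] = refs

    def read(self, name):
        return b"".join(self.blocks[fp] for fp in self.files[name])

store = DedupStore()
payload = b"the same quarterly report " * 10_000
store.write("copy1.doc", payload)
store.write("copy2.doc", payload)  # second copy adds zero new blocks
assert store.read("copy2.doc") == payload
print(len(payload) * 2, "logical bytes ->", sum(map(len, store.blocks.values())), "stored")
```

Writing the same file twice doubles the logical size but adds nothing to physical storage, which is exactly the SAN problem the question describes.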
Skynet08
Most Recent 1 month, 2 weeks ago
Selected Answer: D
The question mentions "redundant," which indicates the answer will be D.
upvoted 1 times
...
Rider2053
2 months, 2 weeks ago
Selected Answer: D
The data deduplication process systematically eliminates redundant copies of data and files, which can help reduce storage costs and improve version control. In an era when every device generates data and entire organizations share files, data deduplication is a vital part of IT operations.
upvoted 1 times
...
Moose01
2 months, 3 weeks ago
Selected Answer: D
I need to slow down and read it. It is de-duplication, not duplication. Jesus, what a trap.
upvoted 4 times
...
Eltooth
5 months ago
Selected Answer: D
D is the correct answer. "Redundant" can mean multiple (think redundant systems), so if you have multiple versions of the data, dedup reduces those copies to one master and multiple stubs. Yes, there would be a hit on CPU performance the first time dedup runs, but long term it keeps saving space as new (redundant) data is added. Compression would reference each redundant bit/byte with pointers, filling up the master index record and adding processing overhead each time data is added, searched for, or retrieved.
upvoted 2 times
...
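A rough way to see the trade-off described above (the 100-copy workload and 1 MiB size are invented for illustration): compression shrinks each file individually, while dedup collapses identical copies into one master.

```python
import os
import zlib

copies = 100
file_data = os.urandom(1 << 20)  # 1 MiB of random bytes: nearly incompressible

compressed_total = copies * len(zlib.compress(file_data))  # each copy shrunk separately
deduped_total = len(file_data)  # one master copy; each duplicate becomes a small stub

print(f"compression alone: ~{compressed_total:,} bytes across {copies} copies")
print(f"deduplication:     ~{deduped_total:,} bytes plus {copies} small stubs")
```

On data that is mostly duplicate copies, compression barely helps, while dedup reclaims almost everything.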
Ezebuike
6 months, 1 week ago
Assume you have a very large file on your desktop that is occupying a lot of storage space; you can zip up the folder and the size of the file will shrink. What does that mean? You are compressing the file. That same logic can be applied to this question. Thus, the correct answer is A. Compression.
upvoted 2 times
...
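The zip intuition from the comment above, in a few lines (the text payload is made up; real savings depend entirely on how repetitive the data is):

```python
import zlib

text = b"quarterly sales figures, region north " * 25_000  # hypothetical large file
smaller = zlib.compress(text)
print(f"{len(text):,} bytes -> {len(smaller):,} bytes after compression")
# Compression shrinks each file in place, but every redundant copy of the
# file still consumes space; dedup is what removes the copies themselves.
```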
3NO5
9 months, 3 weeks ago
D is the best answer. Deduplication is the best solution for managing excess data, even if it's not just duplicates. It helps remove redundant and unneeded data efficiently.
upvoted 1 times
...
dm808
11 months ago
Selected Answer: A
Deduplication doesn't address unusable data, so it has to be compression, A.
upvoted 1 times
dm808
11 months ago
and "redundant" can also mean "unnecessary" as well as "duplicate"
upvoted 2 times
...
...
Kyanka
11 months, 3 weeks ago
Selected Answer: D
D is pretty much the "textbook" answer for this question.
upvoted 1 times
...
andyprior
1 year ago
Selected Answer: A
Deduplication is effective in organizations that have a lot of redundant data, such as backup systems that have several versions of the same file. Compression is effective in decreasing the size of unique files, such as images, videos, and databases
upvoted 1 times
...
DragonHunter40
1 year ago
I say the answer is A. The question isn't talking about getting rid of the data, and nine times out of ten no one is going to go through large amounts of data to see what's a duplicate. Not to mention, you wouldn't know what to keep or delete. A, "Compression," is the simplest answer.
upvoted 1 times
...
Bright07
1 year ago
D is the answer, although options A and D look similar. Here is a simple explanation of both. A storage area network (SAN) is a computer network that provides access to consolidated, block-level data storage. Deduplication commonly occurs at the block level, whereas compression generally occurs at the file level. Since the question involves block-level SAN storage, which is the level deduplication operates at, the answer is deduplication.
upvoted 3 times
...
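The block-level point above, sketched with a hypothetical fixed block size: two file versions that differ in only one block share every other block, so block-level dedup stores the common blocks once, while a file-level compressor would store both versions in full.

```python
import hashlib

BLOCK = 4096  # assumed fixed block size

def fingerprints(data):
    """Hash each fixed-size block -- the granularity block-level dedup works at."""
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}

v1 = b"".join(bytes([i]) * BLOCK for i in range(10))  # ten distinct blocks
v2 = v1[:-BLOCK] + b"\xff" * BLOCK                    # only the last block changed

shared = fingerprints(v1) & fingerprints(v2)
print(f"{len(shared)} of 10 blocks shared; dedup stores them once")
```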
pete79
1 year ago
D, as the question states a large amount of redundant data, hence deduplication.
upvoted 1 times
...
SBD600
1 year ago
Selected Answer: D
D is right
upvoted 1 times
...
YesPlease
1 year, 2 months ago
Answer A) Compression will address ALL data stored on the SAN. I thought D at first too, but we don't know how much of the data is actually duplicated. It may be only 1% of all the data, which wouldn't make a difference compared to compressing ALL of the data.
upvoted 2 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other