Welcome to ExamTopics

Exam CISSP topic 1 question 10 discussion

Actual exam question from ISC's CISSP
Question #: 10
Topic #: 1
[All CISSP Questions]

An organization has been collecting a large amount of redundant and unusable data and filling up the storage area network (SAN). Management has requested the identification of a solution that will address ongoing storage problems. Which is the BEST technical solution?

  • A. Compression
  • B. Caching
  • C. Replication
  • D. Deduplication
Suggested Answer: D

Comments

Tanzy360
Highly Voted 2 years, 2 months ago
Selected Answer: D
D is the only answer choice that makes sense with the excess data
upvoted 8 times
...
franbarpro
Highly Voted 2 years, 2 months ago
Selected Answer: D
"D" it is. Data deduplication is a process that eliminates excessive copies of data and significantly decreases storage capacity requirements. Deduplication can be run as an inline process as the data is being written into the storage system and/or as a background process to eliminate duplicates after the data is written to disk. https://www.netapp.com/data-management/what-is-data-deduplication/#:~:text=Data%20deduplication%20is%20a%20process,data%20is%20written%20to%20disk.
upvoted 6 times
...
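The deduplication process described in the comment above can be sketched in Python: split the data into fixed-size blocks, hash each block, and store each unique block only once. The 4 KiB block size and SHA-256 hashing here are illustrative assumptions, not any specific SAN vendor's implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real SAN dedup block sizes vary

def deduplicate(data: bytes, block_size: int = BLOCK_SIZE):
    """Split data into fixed-size blocks, storing each unique block once.

    Returns (store, index): store maps SHA-256 digest -> block bytes,
    index lists the digests needed to rebuild the original data.
    """
    store, index = {}, []
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks are stored only once
        index.append(digest)
    return store, index

def reconstruct(store, index):
    """Rebuild the original byte stream from the dedup store and index."""
    return b"".join(store[d] for d in index)

# Ten identical 4 KiB blocks collapse to a single stored block.
data = (b"x" * 4096) * 10
store, index = deduplicate(data)
print(len(index), "blocks referenced,", len(store), "unique block stored")
```

This mirrors the inline-dedup idea: only one physical copy of each unique block is kept, and the index (the "stubs") points every logical copy at it.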
Eltooth
Most Recent 2 months ago
Selected Answer: D
D is the correct answer. Redundant can mean multiple (think redundant systems), so if you have multiple versions of the data, then dedup would reduce these copies to one main copy and multiple stubs. Yes, there would be a hit on CPU performance when dedup is run for the first time, but long term this speeds up space saving when new (redundant) data is added. Compression would reference each redundant bit/byte and keep pointers to each, filling up the master index record and adding processing overhead each time data was added, searched for, or retrieved.
upvoted 1 times
...
Ezebuike
3 months, 1 week ago
Assuming you have a very large file on your desktop that is occupying a lot of storage space, you can zip up the folder and the size of the file will be reduced. What does that mean? You are compressing the file. That same logic can be applied to this question. Thus, the correct answer is A. Compression
upvoted 2 times
...
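The zip analogy above maps directly onto lossless compression. A minimal sketch using Python's standard zlib module (the sample data and compression level are illustrative assumptions):

```python
import zlib

# Repetitive data compresses extremely well, just as zipping
# a large redundant file shrinks it on disk.
original = b"redundant and unusable data " * 1000
compressed = zlib.compress(original, level=9)

assert zlib.decompress(compressed) == original  # lossless round trip
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```

Note the contrast with deduplication: compression shrinks each stored object by re-encoding its bytes, while dedup avoids storing repeated objects (or blocks) at all; which wins depends on how much of the data is actually duplicated.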
3NO5
6 months, 3 weeks ago
D is the best answer. Deduplication is the best solution for managing excess data, even if it's not just duplicates. It helps remove redundant and unneeded data efficiently.
upvoted 1 times
...
dm808
8 months ago
Selected Answer: A
Deduplication doesn't address unusable data, so it has to be compression: A.
upvoted 1 times
dm808
7 months, 4 weeks ago
and "redundant" can also mean "unnecessary" as well as "duplicate"
upvoted 1 times
...
...
Kyanka
8 months, 3 weeks ago
Selected Answer: D
D is pretty much the "textbook" answer for this question.
upvoted 1 times
...
andyprior
9 months ago
Selected Answer: A
Deduplication is effective in organizations that have a lot of redundant data, such as backup systems that have several versions of the same file. Compression is effective in decreasing the size of unique files, such as images, videos, and databases
upvoted 1 times
...
DragonHunter40
9 months, 2 weeks ago
I say the answer is A. The question isn't talking about getting rid of the data, and 9 times out of 10 no one is going to go through large amounts of data to see what's a duplicate. Not to mention, you wouldn't know what to keep or delete. A, "Compression," is the simplest answer.
upvoted 1 times
...
Bright07
9 months, 2 weeks ago
D is the answer. Although answers A and D look similar, here is a simple explanation of both. A storage area network (SAN), or storage network, is a computer network which provides access to consolidated, block-level data storage. Deduplication commonly occurs at the block level, while compression generally occurs at the file level. Since the question involves a SAN, which is block-level storage, deduplication is the better fit, so the answer is D.
upvoted 1 times
...
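The block-level vs file-level distinction drawn above can be made concrete in a short Python sketch that deduplicates across blocks and then compresses within each unique stored block. The block size and the use of zlib are assumptions for illustration only; the two techniques are complementary rather than mutually exclusive.

```python
import hashlib
import zlib

def dedup_then_compress(data: bytes, block_size: int = 4096):
    """Sketch: block-level dedup first, then compress each unique block."""
    store, index = {}, []
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(block)  # compression within the block
        index.append(digest)                      # dedup across blocks
    return store, index

def restore(store, index):
    """Decompress each referenced block and reassemble the original data."""
    return b"".join(zlib.decompress(store[d]) for d in index)
```

On heavily duplicated data, most of the savings come from the dedup step (repeated blocks are never stored again), while compression further shrinks whatever unique blocks remain.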
pete79
9 months, 3 weeks ago
D, as question states large amount of redundant data, hence deduplication.
upvoted 1 times
...
SBD600
10 months ago
Selected Answer: D
D is right
upvoted 1 times
...
YesPlease
11 months, 3 weeks ago
Answer A) Compression will address ALL data stored on the SAN. I thought D at first too, but we don't know how much of the data is actually duplicated; it may be only 1% of all the data, which won't make a difference compared to compressing ALL of the data.
upvoted 2 times
...
aape1
1 year, 1 month ago
Selected Answer: A
A, the reason is that deduplication is NOT secure. "In some instances, a SAN may implement deduplication in order to save space by not retaining multiple copies of the same file. However, this can sometimes result in data loss if the one retained original is corrupted." - (ISC)2 CISSP Certified Information Systems Security Professional Official Study Guide, 9th Edition, Chapter 11 - Converged Protocols - Storage Area Network (SAN)
upvoted 3 times
...
Sledge_Hammer
1 year, 2 months ago
D is the answer. It's Deduplication. Deduplication refers to a method of eliminating a dataset's redundant data. In a secure data deduplication process, a deduplication assessment tool identifies extra copies of data and deletes them, so a single instance can then be stored. Data deduplication software analyzes data to identify duplicate byte patterns.
upvoted 1 times
...
jens23
1 year, 5 months ago
Selected Answer: D
Compression (option A) reduces the size of data by encoding it in a more compact form, but it may not effectively address the issue of redundant data. It's D
upvoted 3 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other
