Welcome to ExamTopics


Exam 312-50v12 topic 1 question 123 discussion

Actual exam question from ECCouncil's 312-50v12
Question #: 123
Topic #: 1

Which file is a rich target to discover the structure of a website during web-server footprinting?

  • A. domain.txt
  • B. Robots.txt
  • C. Document root
  • D. index.html
Suggested Answer: B
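The value of robots.txt for footprinting is that its Disallow directives enumerate paths the site owner wants crawlers to skip, which often includes admin panels, backups, and other unlinked directories. A minimal sketch of extracting those paths is below; the sample file content is hypothetical, and in practice you would fetch the file from the target host first.

```python
# Sketch: parse Disallow directives out of a robots.txt file.
# The sample content below is hypothetical; during footprinting you
# would first retrieve the real file, e.g. with urllib or curl.

def parse_robots(text: str) -> list[str]:
    """Return the paths listed in Disallow directives."""
    paths = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:
                paths.append(path)
    return paths

sample = """\
User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /cgi-bin/
Allow: /public/
"""

print(parse_robots(sample))  # ['/admin/', '/backup/', '/cgi-bin/']
```

Each returned path is a candidate directory to probe further, precisely because the owner chose to hide it from crawlers.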

Comments

g_man_rap
6 months, 4 weeks ago
B. Robots.txt - This is a file used by web sites to communicate with web crawlers. The robots.txt file contains instructions on which parts of the server should not be accessed by the crawlers. It can provide a wealth of information about the structure of a website because it might list directories that are otherwise not linked or visible to a casual visitor. This can be accessed using a command like curl http://example.com/robots.txt.
D. index.html - This file typically serves as the landing page or home page of a website. While it's important and can contain hyperlinks to other parts of the website, it usually doesn't reveal the full structure of the website, unlike robots.txt, which may reveal directories not linked from the homepage.
upvoted 2 times
insaniunt
11 months ago
Selected Answer: B
B. Robots.txt
upvoted 1 times
IPconfig
1 year ago
The robots.txt file contains the list of web-server directories and files that the website owner wants to hide from web crawlers. An attacker can simply request the robots.txt file from the URL and retrieve sensitive information, such as the root directory structure and content-management-system details about the target website. An attacker can also download the robots.txt file of a target website using the Wget tool.
upvoted 2 times
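The retrieval step described above can be sketched in the shell. The file contents here are a local hypothetical sample so the filtering step is demonstrable without touching a live host; the commented-out curl and Wget lines show how the file would actually be fetched during an engagement (example.com is a placeholder).

```shell
# Create a sample robots.txt locally (hypothetical content) so the
# filtering step below can run without a live target.
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /admin/
Disallow: /old-site/
EOF

# Against a real target you would fetch it instead, e.g.:
#   curl -s http://example.com/robots.txt -o robots.txt
#   wget -q http://example.com/robots.txt -O robots.txt

# List the directories the owner asked crawlers to avoid:
grep -i '^disallow' robots.txt
```

The grep output is the shortlist of hidden paths worth enumerating next.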
Vincent_Lu
1 year, 4 months ago
Selected Answer: B
B. Robots.txt
upvoted 1 times
jeremy13
1 year, 6 months ago
Selected Answer: B
B. Robots.txt
upvoted 1 times
eli117
1 year, 7 months ago
Selected Answer: B
Robots.txt is a file that webmasters use to communicate with web crawlers and other automated agents visiting their site. This file is often used to exclude certain directories or pages from being crawled, but it can also contain valuable information about the site's directory structure and organization. By examining the robots.txt file, an attacker can gain insight into the site's organization and potentially identify hidden or sensitive directories. Domain.txt is not a standard file used in web server configuration or operation. Document root is the root directory of the web server, and index.html is the default home page file. While these files can provide information about the web server and its configuration, they do not necessarily reveal the structure of the website.
upvoted 1 times
Community vote distribution
  • A (35%)
  • C (25%)
  • B (20%)
  • Other
