This is my TryHackMe walkthrough, created to document my learning journey and share solutions with the community. The writeups include a mix of hints, step-by-step explanations, and final answers to help players who get stuck, while still encouraging independent problem-solving.

Google Dorking Room - Explaining how search engines work and how to leverage them to find hidden content!

Overview

Walkthrough

1. Ye Ol’ Search Engine

No hints needed!

2. Let’s Learn About Crawlers

  • Name the key term of what a "Crawler" is used to do. This is known as a collection of resources and their locations

=> Answer: Index

  • What is the name of the technique that "Search Engines" use to retrieve this information about websites?

=> Answer: Crawling

  • What is an example of the type of contents that could be gathered from a website?

=> Answer: Keywords
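
The two ideas above (crawling a page, then indexing what it contains) can be sketched in a few lines of Python. This is a hypothetical toy, not how a real search engine works: it parses one hardcoded page with the standard-library `html.parser`, collecting links the crawler would follow next and keywords it would store in its index.

```python
from html.parser import HTMLParser

class TinyCrawler(HTMLParser):
    """Toy illustration of what a crawler gathers from a page:
    links to visit next, and keywords to store in the index."""
    def __init__(self):
        super().__init__()
        self.links = []      # URLs the crawler would queue for visiting
        self.keywords = []   # terms a search engine would index

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "meta" and attrs.get("name") == "keywords":
            self.keywords += [k.strip() for k in attrs.get("content", "").split(",")]

# A hypothetical page the crawler has just fetched
page = """<html><head><meta name="keywords" content="dorking, crawler"></head>
<body><a href="/about">About</a> <a href="/blog">Blog</a></body></html>"""

crawler = TinyCrawler()
crawler.feed(page)
print(crawler.links)     # ['/about', '/blog']
print(crawler.keywords)  # ['dorking', 'crawler']
```

A real crawler would then fetch each discovered link and repeat the process, which is how an index of "resources and their locations" grows.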

3. Enter: Search Engine Optimisation

No hints needed!

4. Beepboop - Robots.txt

  • Where would "robots.txt" be located on the domain "ablog.com"?

=> Answer: ablog.com/robots.txt

  • If a website was to have a sitemap, where would that be located?

=> Answer: /sitemap.xml

  • How would we only allow “Bingbot” to index the website?

=> Answer: User-agent: Bingbot

  • How would we prevent a "Crawler" from indexing the directory "/dont-index-me/"?

=> Answer: Disallow: /dont-index-me/

  • What is the extension of a Unix/Linux system configuration file that we might want to hide from "Crawlers"?

=> Answer: .conf
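
Putting the answers from this task together, a robots.txt for ablog.com might look like the sketch below. The paths are hypothetical, and note that the `*`/`$` wildcards are an extension honoured by Google and Bing rather than part of the original standard:

```
# Only Bingbot may index the site
User-agent: Bingbot
Allow: /
Disallow: /dont-index-me/
Disallow: /*.conf$

# Every other crawler is blocked entirely
User-agent: *
Disallow: /

Sitemap: https://ablog.com/sitemap.xml
```

Keep in mind robots.txt is only a polite request: well-behaved crawlers obey it, but nothing stops a person (or a malicious bot) from reading it and visiting the disallowed paths directly, which is exactly why it is interesting for reconnaissance.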

5. Sitemaps

  • What is the typical file structure of a "Sitemap"?

=> Answer: XML

  • What real life example can "Sitemaps" be compared to?

=> Answer: Map

  • Name the keyword for the path taken for content on a website

=> Answer: route
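
To make the XML structure concrete, here is a minimal hypothetical sitemap for the same ablog.com domain used in the robots.txt task. Each `<loc>` entry is one route to a piece of content on the site:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://ablog.com/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
  <url>
    <loc>https://ablog.com/blog/first-post</loc>
  </url>
</urlset>
```

Like robots.txt, a sitemap is publicly readable, so it can reveal routes you would not find by browsing the site normally.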

6. What is Google Dorking?

  • What would be the format used to query the site bbc.co.uk about flood defences?

=> Answer: site: bbc.co.uk flood defences

  • What term would you use to search by file type?

=> Answer: filetype:

  • What term can we use to look for login pages?

=> Answer: intitle: login
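
A few illustrative dorks built from the operators above (the search terms are arbitrary examples; in practice the operators are usually written with no space after the colon, and they can be combined to narrow results further):

```
site:bbc.co.uk flood defences      results restricted to bbc.co.uk
filetype:pdf annual report         only PDF documents
intitle:login                      pages with "login" in the title
site:ablog.com filetype:conf       combined: .conf files on one domain
```

Searches like the last one are why hiding sensitive files via robots.txt alone is not enough: if a crawler has already indexed them, a dork will surface them.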