WEBSITE SECURITY STATISTICS REPORT
MAY 2013
Website security is an ever-moving target. New website launches are common, new code is released
constantly, new Web technologies are created and adopted every day; as a result, new attack techniques are
frequently disclosed that can put every online business at risk. In order to stay protected, enterprises must
receive timely information about how they can most efficiently defend their websites, gain visibility into the
performance of their security programs, and learn how they compare with their industry peers. Obtaining
these insights is crucial in order to stay ahead and truly improve enterprise website security.
To help, WhiteHat Security has been publishing its Website Security Statistics Report since 2006. This report
is the only one that focuses exclusively on previously unknown vulnerabilities in custom Web applications (code that is unique to an organization) found in real-world websites. The underlying data is hundreds of terabytes
in size, comprises vulnerability assessment results from tens of thousands of websites across hundreds of the
most well-known organizations, and collectively represents the largest and most accurate picture of website
security available. Inside this report is information about the most prevalent vulnerabilities, how many get
fixed, how long the fixes can take on average, and how every application security program may measurably
improve. The report is organized by industry, and is accompanied by WhiteHat Security’s expert analysis and
recommendations.
Through its Software-as-a-Service (SaaS) offering, WhiteHat Sentinel, WhiteHat Security is uniquely positioned
to deliver the depth of knowledge that organizations require to protect their brands, attain compliance, and
avert costly breaches.
Whether you read the Verizon Data Breach Incidents Report, the Trustwave Global Security
Report, the Symantec Internet Security Threat Report, or essentially all other reports throughout
the industry, the story is the same -- websites and web applications are one of, if not the
leading target of cyber-attack. This has been the case for years. Website breaches lead directly to financial loss, brand damage, malware
propagation, and loss of customers. Given modern society’s ever-increasing reliance on the web,
the impact of a breach and the associated costs are going up, and fast. While an organization
may ultimately survive a cyber-crime incident, the business disruption is often severe. It is far
preferable to do something now to avert and minimize harm before disaster strikes.
Piling on more network- and host-layer defenses is not the answer. These controls provide nearly zero protection against today’s web-based
attacks. So while protecting the network and the host layers is still important, forward-thinking
professionals are now seeing the bigger picture of computer security for what it really is: a
software security problem.
Understanding this subtle distinction is key. Organizations must demand that software be
designed in a way that makes it resilient against attack and does not require additional security
products to protect it. The questions that organizations should be asking themselves are: How
do we integrate security throughout the software development life-cycle (SDLC)? How do we
procure this type of software?
As simple as these questions sound, the answers have proven elusive. Most responses by the so-
called experts are based purely on personal anecdote and devoid of any statistically compelling
evidence, such as the data presented in this report. Many of these experts will cite various “best-
practices,” such as software security training for developers, security testing during QA, static
code analysis, centralized controls, Web Application Firewalls, penetration-testing, and more;
however, the term “best-practices” implies the activity is valuable in every organization at all
times. The reality, though, is that just because a certain practice works well for one organization
does not mean it will work at another. Unfortunately, this hasn’t prevented many from promoting their preferred practices as universally applicable.
The net result: websites are no less hackable today than they were yesterday.
Organizations need to better understand how various parts of the SDLC affect the introduction of
vulnerabilities, which leave the door open to breaches. For example, we would like to say that a given SDLC activity measurably reduces vulnerabilities, but we cannot make that
statement today because the supporting data does not exist -- at least, not yet. If we had these
insights, supported by empirical evidence, it would be nothing less than a game changer.
To move in that direction, we asked our customers a series of questions about their application security
program. Questions such as: How often do you perform security tests on your code during QA?
What we learned is that the connection between various SDLC behaviors and
vulnerability outcomes and breaches is far more complicated than we ever imagined.
We also learned that in many respects overall website security continues to show steady signs of improvement despite the
constant stream of news headlines. Among the key findings:
• 53% of organizations said their software projects contain an application library or framework
that centralizes and enforces security controls. These organizations experienced 64% more
vulnerabilities, resolved them 27% slower, but demonstrated a 9% higher remediation rate.
• 39% of organizations said they perform some amount of Static Code Analysis on their
website(s) underlying applications. These organizations experienced 15% more vulnerabilities,
resolved them 26% slower, and had a 4% lower remediation rate.
• 55% of organizations said they have a Web Application Firewall (WAF) in some state of
deployment. These organizations experienced 11% more vulnerabilities, resolved them 8%
slower, and had a 7% lower remediation rate.
• 23% of organizations said their website(s) experienced a data or system breach as a result of
an application layer vulnerability. These organizations experienced 51% fewer vulnerabilities,
resolved them 18% faster, and had a 4% higher remediation rate.
Much of the data above seems reasonable, even logical, while other bits seem completely
counterintuitive. For instance, organizations that do perform Static Code Analysis or have a Web
Application Firewall appear to have notably worse performance metrics than those that do
neither.
One explanation may be that these metrics are precisely WHY these organizations [recently]
adopted those solutions, and that over time they will have fewer vulnerabilities. This remains to be seen. It could also be that they are misusing or
under-utilizing the solutions they have engaged. What we know for sure is there are customers for whom these solutions absolutely
make a measurable positive impact -- we see it in the data -- while others receive no discernible
benefit. Either way, the data suggests that there are in fact few, if any, truly universal application security best-practices.
Consider developer behavior: when programmers introduce vulnerabilities and repeatedly fail
to fix them, it could be because they don’t understand the issues well enough. If this is the case,
this is a good indication that providing training is a good idea. It could also easily be that a long
list of other organizational factors is at play.
We were also curious about business drivers and the impact of compliance on website security.
By a slim margin, organizations said their #1 driver for resolving vulnerabilities was compliance,
narrowly ahead of risk reduction. At the same time, when we asked the same organizations to
rank the reasons why their vulnerabilities go unresolved, compliance was cited as the #1 reason.
Proponents of compliance often suggest that regulatory controls be treated as a security floor, not a ceiling. While that is a
nice concept in casual conversation, this is not the enterprise reality we see. Keep in mind that
WhiteHat Sentinel reports are often used to satisfy a plethora of auditors, but WhiteHat Security
is not a PCI-DSS ASV nor a QSA vendor. When organizations are required to allocate funds
toward compliance, which may or may not enhance security, there are often no resources left or
tolerance by the business to do anything more effective.
Finally, we also wanted to know what part(s) of the organization are held accountable in the
event of a website(s) data or system breach: we found that 79% said the Security Department
would be accountable. Additionally, 74% said Executive Management, 66% Software
Development, and 22% Board of Directors. By analyzing the data in this report, we see
evidence of a direct correlation between increased accountability and decreased breaches, and between accountability and the effectiveness of the security practices an organization adopts.
For example, if developers are required by compliance to attend security training, they’ll view
it as a checkbox activity and not internalize much of anything they’ve learned in training.
However, if the organization places accountability on developers should a breach occur, all of
a sudden training effectiveness increases, because now there is an obvious incentive to learn.
When you empower those who are also accountable, whatever best-practices are then put into
place have a higher likelihood of being effective. For security to really improve, some part of the
organization must be held accountable. This is our working theory, the underlying narrative of
our report.
Here is the biggest lesson and the common theme we’re seeing: software security has not yet
percolated into the collective consciousness as something organizations actually need to do
something about proactively. While much lip service may be paid, the fact remains
that application security professionals are essentially selling preventative medicine, while much
of the buying population waits for a trip to the emergency room
before kicking into gear. This is a dangerous policy, and it stands in stark contrast to the industry’s pious rhetoric,
which attempts to obfuscate that reality.
HIGH-LEVEL FINDINGS
Vulnerability Classes
Industry
• The industries that remediated the largest percentage of their serious* vulnerabilities on
average were Entertainment & Media (81%), Telecommunications (74%), and Energy (71%)
• The industries that remediated the smallest percentage of their serious* vulnerabilities on average were …
• 85% of organizations said they perform some amount of application security testing in pre-
deployment
• Organizations said their #1 driver for resolving vulnerabilities was “Compliance,” narrowly ahead of “Risk Reduction”
• In the event an organization experiences a website(s) data or system breach, 79% said the Security Department would be held accountable
• Organizations that provided instructor led or computer-based software security training for
their programmers had 40% fewer vulnerabilities, resolved them 59% faster, but exhibited a 12%
lower remediation rate.
• Organizations that performed Static Code Analysis on their website(s) underlying applications
had 15% more vulnerabilities, resolved them 26% slower, and had a 4% lower remediation rate.
• Organizations with a Web Application Firewall deployment had 11% more vulnerabilities,
resolved them 8% slower, and had a 7% lower remediation rate.
AT A GLANCE:
THE CURRENT STATE OF WEBSITE SECURITY
[Chart: average number of serious* vulnerabilities per website, by year]
While the vulnerability reduction trend is welcome news, there are several possible explanations that must
be considered. First, we remind readers that this report illustrates a best-case scenario: websites are, at a minimum,
as vulnerable as these figures indicate. Second, though some may
believe otherwise, websites generally may in fact be getting more “secure” — that is to say,
as more organizations require an attestation of security readiness before business relationships move forward, things tend to
improve. Third, many websites may face primarily
automated attacks rather than sentient adversaries because of their value to the business and/or the attacker. While there may be some
truth to this, we have seen reports released by our peers and their numbers are not far off from our own.
While overall vulnerability reduction is no doubt positive, the sheer number of serious*
vulnerabilities in the wild is quite stunning. Consider for a moment that there are roughly 1.8 million
websites that, judging by their use of SSL, have something to keep
private. At 56 vulnerabilities per website, we can estimate that there are over 100 million serious
vulnerabilities undiscovered on the Web (1.8 million x 56 ≈ 100 million).
Vulnerability counts alone do not provide a clear picture of the current state of website
security. Notice that the percentage of websites with at least one serious* vulnerability remains at
80% or greater – in some industries even at 100% – and that these numbers have been largely
unchanged over the years. This suggests that building web applications is inherently a task
that lends itself to vulnerabilities. We are not inclined to simply blame developers – that’s too
easy, and unfair – as applications become ever more complex and the attack surface of
applications grows with each newly added feature.
If anything, it proves that we are recording and socializing – both upwards and downwards in the organization – the wrong
metrics to effect change. Fortunately, some industries are doing quite well, but overall, since the percentage of vulnerable websites is not
disappearing, this statistic may require correlation with another data point to be instructive.
Perhaps that’s the declining number of serious* vulnerabilities per website.
The most common question we receive regarding the Average Time-to-Fix (Days) and Average
Remediation Rate statistics is: why does it take so long for organizations to remediate
vulnerabilities they’ve been informed of? To the uninitiated, the choice would seem to be a no-brainer.
The fact is, again, these vulnerabilities are not resolved with a simple vendor-supplied
patch. This is almost always custom code we’re dealing with and, as such, it requires the
creation of a custom patch. So while many contributing factors may play into the eventual fix, at its core,
security is a trade-off.
Should the organization apply a fix that will for certain cost it money? Or leave open a vulnerability that might be exploited
and might cost the company money? The challenge is that this decision must be made every day
with incomplete information. There is no guarantee that any single vulnerability will get
exploited today, tomorrow, or ever – and if it is, no certainty about what the costs will be to the organization.
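To make the trade-off concrete, here is a minimal expected-cost sketch; every figure in it is hypothetical and chosen only to illustrate the structure of the decision, not to reflect any data in this report.

    # Hypothetical fix-now-or-defer trade-off; none of these numbers come from the report.
    def expected_breach_cost(daily_exploit_prob: float, days_exposed: int,
                             breach_cost: float) -> float:
        """Expected loss from leaving a vulnerability open for a given number of days."""
        prob_exploited = 1 - (1 - daily_exploit_prob) ** days_exposed
        return prob_exploited * breach_cost

    fix_cost = 25_000.0  # certain cost: developer time, QA, deployment
    risk = expected_breach_cost(daily_exploit_prob=0.001,  # assumed 0.1% chance per day
                                days_exposed=365,
                                breach_cost=500_000.0)     # assumed breach impact
    print(f"Expected breach cost over a year: ${risk:,.0f}")  # roughly $153,000
    print("Fix now" if fix_cost < risk else "Defer, on these assumptions")

The point is not the specific numbers, which no organization knows precisely, but that the comparison is always between a certain cost and a probabilistic one.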
There are many reasons why vulnerabilities go unresolved; as one example, we have seen
deprecated websites under the Sentinel Service still in active use for over two years.
Later in the “Survey: Application Security In The SDLC” section of this report, we asked
customers to rank the prevalence of these issues. The results were highly illuminating.
This is a good opportunity to point out that while an individual programmer can be in control
of what net-new vulnerabilities they produce with each release, they often don’t have much
say over legacy code or over the remediation priorities set by development managers.
What is needed is better data
that improves true risk-decision-making capabilities. With better data we can help development
teams prioritize the issues that must be fixed immediately, and accurately decide which issues can wait till later and be placed under the watchful eye of the
operational security team.
Organizations can also invest in defenses,
such as centralized security controls, or temporarily mitigate issues that will land in production
systems no matter what is done in the SDLC. A virtual patch using a Web Application Firewall is one such mitigation. The larger goal is to reduce the
number of vulnerabilities so that these tough choices are faced far less often than they are today.
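To illustrate what a virtual patch does, here is a minimal sketch written as WSGI middleware in Python. It is not how any particular WAF product works; the route, parameter name, and attack pattern are hypothetical stand-ins for a rule targeting one known, not-yet-fixed vulnerability.

    import re
    from urllib.parse import parse_qs

    # One hypothetical rule: block exploit-shaped input to a known-vulnerable endpoint.
    BLOCK_RULES = [
        ("/search", "q", re.compile(r"(<script|' *OR *'1'='1)", re.IGNORECASE)),
    ]

    class VirtualPatchMiddleware:
        """Reject requests matching a known exploit pattern before they reach the app."""
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            path = environ.get("PATH_INFO", "")
            params = parse_qs(environ.get("QUERY_STRING", ""))
            for rule_path, param, pattern in BLOCK_RULES:
                if path == rule_path:
                    for value in params.get(param, []):
                        if pattern.search(value):
                            start_response("403 Forbidden",
                                           [("Content-Type", "text/plain")])
                            return [b"Request blocked by virtual patch"]
            return self.app(environ, start_response)

The underlying vulnerability still exists in the code; the virtual patch merely buys time until a proper fix ships.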
Websites are an ongoing business concern and security must be ensured all the time, not just at
a single point in time. An attacker needs to find and exploit just a single vulnerability, on any given day, to win. That’s why the true Key Performance Indicator
is the Window of Exposure: the number of days over a year that a website is exposed to at least one serious* vulnerability. Window of Exposure combines how many vulnerabilities are introduced, how quickly they are fixed, and how many get fixed; any one of
these metrics, or a combination thereof, may be the area that has the greatest impact on a given
organization’s Window-of-Exposure outcome. To provide context, let’s consider two identical
websites, SiteA and SiteB.
1) SiteA: … had at least one of those issues publicly exposed.
2) SiteB: … had at least one of those issues publicly exposed.
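As a minimal sketch of the calculation, Window of Exposure is the size of the union of the days on which at least one serious vulnerability was open; the dates below are hypothetical.

    from datetime import date, timedelta

    def window_of_exposure(vulns, year_start, year_end):
        """Count days in [year_start, year_end] with at least one open vulnerability.
        vulns is a list of (opened, closed) date pairs; closed=None means still open."""
        exposed_days = set()
        for opened, closed in vulns:
            day = max(opened, year_start)
            end = min(closed or year_end, year_end)
            while day <= end:
                exposed_days.add(day)
                day += timedelta(days=1)
        return len(exposed_days)

    # Hypothetical findings for one website during 2012.
    vulns = [(date(2012, 1, 10), date(2012, 3, 1)),   # fixed after 51 days
             (date(2012, 2, 15), date(2012, 2, 20)),  # overlaps the first
             (date(2012, 11, 5), None)]               # never fixed
    print(window_of_exposure(vulns, date(2012, 1, 1), date(2012, 12, 31)))  # 109 days

Note how the overlapping vulnerability adds no exposure: fixing it faster would not shrink the window, which is why no single metric tells the whole story.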
The same dynamics may explain why a particular development group in a single organization seemingly performs better than
the rest. All told, the data shows that the industry in which a particular website falls seems
to, at best, only slightly correlate to an expected security posture. Previous reports have also
explored potential correlations that may exist between organization size and development
framework/programming language in use, but only small performance variations emerge.
Now that we have an understanding of the average total number of serious* vulnerabilities,
Time-to-Fix, Remediation Rates, and Window of Exposure across industry verticals, we’ll
look at the distribution of vulnerability classes. In Figure 4, the most prevalent vulnerability
classes are calculated based upon their percentage likelihood of at least one instance being
found within any given website. This approach minimizes data skewing in websites that are
either highly secure or extremely risk-prone.
(1 & 2) Information Leakage and Cross-Site Scripting again claim the top two spots, found in … of
websites respectively. Last year, coincidentally, those percentages were exactly opposite. While both classes have declined over the long term, progress
seems to have stalled. A potential reason for this is the sheer number of instances in the average website.
Note: For those unfamiliar, Information Leakage is largely a catch-all term used to describe
a vulnerability where a website reveals sensitive data, such as technical details of the Web
application or environment, that is of no use
for a typical visitor, but may be used by an attacker to exploit the system, its hosting network,
or users. A common example is a failure to scrub out HTML/JavaScript comments containing sensitive information.
3) Content Spoofing allows an attacker to inject arbitrary plain text into a page (but not HTML/
JavaScript). This vulnerability class is most often used to force a website to display content of the attacker’s choosing, lending credibility to phishing and similar scams.
4) Brute Force moved up to fourth from sixth place in the last year and increased 10 percentage
points. Most commonly this indicates a login system that reveals which half of the username/password
combination is incorrect. Due to spammers mining for valid email addresses, which double as
usernames on a variety of websites, enterprises have an increased awareness and appreciation
for the problem. In these cases we adjust the severity of the Brute Force vulnerability accordingly;
otherwise we would effectively be asserting that nearly every website has at least one serious vulnerability, particularly those with login forms.
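Two standard countermeasures follow directly from this description: return one uniform error message regardless of which half of the credential pair failed, and throttle repeated attempts. A minimal sketch, with a hypothetical in-memory user store:

    import secrets
    from collections import defaultdict

    USERS = {"alice@example.com": "correct-horse"}  # hypothetical; real systems store hashes
    FAILED = defaultdict(int)
    MAX_ATTEMPTS = 5

    def login(username: str, password: str) -> str:
        if FAILED[username] >= MAX_ATTEMPTS:
            return "Too many attempts; try again later."  # throttles brute force
        stored = USERS.get(username)
        # Constant-time comparison; a real system would compare password hashes.
        if stored is not None and secrets.compare_digest(stored.encode(), password.encode()):
            FAILED[username] = 0
            return "Welcome!"
        FAILED[username] += 1
        # One generic message: never reveal whether the username or the password was wrong.
        return "Invalid username or password."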
5) Fingerprinting is new to the list. Before launching an attack, adversaries commonly
footprint their target’s web presence and enumerate as much information as possible. Pieces of
information may include the target’s platform distribution and version, web application software
in use, and other configuration details.
Note: While the presence of Fingerprinting as a class is new, we must point out that these
issues were previously reported, placing them under the label of Information Leakage. In 2012 a change was made to break them
out into a class of their own.
7) Insufficient Transport Layer Protection was found in … of websites in 2012. This class allows communication to be exposed to untrusted third parties,
opening the door to man-in-the-middle style attacks.
8) Session Fixation moved from #9 in 2011 to #8 in 2012 and increased by four percentage
points. In Session Fixation, a session identifier issued before login remains valid after the user successfully authenticates, so an attacker who plants a known identifier in a victim’s browser can hijack the session once the victim logs in. Fortunately, because of the nature of this issue, it
can typically be fixed in a single, centralized location: regenerate the session identifier at the moment of login.
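A minimal, framework-agnostic sketch of that fix, using a hypothetical in-memory session store:

    import secrets

    SESSIONS = {}  # session_id -> username (hypothetical session store)

    def on_successful_login(old_session_id: str, user: str) -> str:
        """Issue a fresh session ID at authentication so any identifier an
        attacker planted before login becomes worthless."""
        SESSIONS.pop(old_session_id, None)        # invalidate the pre-login identifier
        new_session_id = secrets.token_urlsafe(32)
        SESSIONS[new_session_id] = user
        return new_session_id                     # set this value in the session cookie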
9) URL Redirector Abuse, found in … of
websites and a vulnerability once largely under-appreciated, has proved itself to be an effective
aid to phishing: victims can be seamlessly bounced
from one page to another on the same website, or sent off to a completely different website.
Note: It is important to understand that websites did not suddenly become vulnerable to this
issue; previously these vulnerabilities were reported under a different
label, so we expect these numbers to climb over time because the issue is indeed pervasive.
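The usual remedy is to refuse to redirect anywhere the application does not explicitly trust. A minimal sketch, with a hypothetical allowlist:

    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"www.example.com", "example.com"}  # hypothetical trusted hosts

    def safe_redirect_target(url: str) -> str:
        """Permit relative paths and known hosts; send everything else home."""
        parsed = urlparse(url)
        if not parsed.scheme and not parsed.netloc:   # relative path such as /account
            return url
        if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
            return url
        return "/"                                    # reject off-site redirects

    print(safe_redirect_target("/account"))                       # /account
    print(safe_redirect_target("http://evil.example.net/phish"))  # /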
10) Insufficient Authorization experienced a large decline during 2012, and now resides in tenth place on the list. This class occurs when an
authenticated or non-authenticated user can access data or functionality that their privilege level
should not allow. For example, User A can surreptitiously obtain data from User B’s account,
or perform actions reserved only for Admins, perhaps simply by changing a single number in a
URL.
When found, typically these vulnerabilities are not overly voluminous; they are simple oversights. In other cases, the
vulnerabilities are indicative of a need for a huge platform upgrade, often put off for as long
as possible. In 2012, … of websites had at least one of them. The most probable explanation of the drop is the separating out of URL
Redirector Abuse vulnerabilities from this class.
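The "change a single number in a URL" failure happens when the lookup itself never checks ownership. A minimal sketch of the missing check, with a hypothetical record store:

    # Hypothetical order store keyed by the numeric ID that appears in the URL.
    ORDERS = {1001: {"owner": "userA", "total": 42.00},
              1002: {"owner": "userB", "total": 13.37}}

    def get_order(order_id: int, current_user: str, is_admin: bool = False) -> dict:
        """Enforce authorization on every object lookup, not just in the UI."""
        order = ORDERS.get(order_id)
        if order is None:
            raise KeyError("no such order")
        if not is_admin and order["owner"] != current_user:
            raise PermissionError("not authorized for this order")
        return order

    print(get_order(1001, "userA"))   # allowed
    # get_order(1002, "userA") raises PermissionError instead of leaking User B's data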
Another very notable change from 2011 to 2012 is that SQL Injection no longer appears among the most prevalent classes, despite remaining one of the most commonly used techniques
to compromise websites and steal the data they possess. This is yet another proof point that
the most prevalent vulnerability classes are not necessarily the most exploited. The prevalence of SQL Injection continued to decline from
2011 and 2010. While progress is being made, wiping out particular vulnerability classes outright has proven elusive.
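The standard fix is parameterized queries, which keep attacker-supplied strings out of the SQL text entirely. A minimal, self-contained sketch using Python's built-in sqlite3 module:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"  # classic injection attempt

    # Vulnerable pattern (do not do this): the input becomes part of the SQL text.
    #   query = f"SELECT email FROM users WHERE name = '{user_input}'"

    # Safe pattern: the driver binds the value as data, never as SQL.
    rows = conn.execute("SELECT email FROM users WHERE name = ?",
                        (user_input,)).fetchall()
    print(rows)  # [] -- the injection attempt matches no user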
The next chart shows prevalence by class in the overall vulnerability population. Notice how greatly it differs from the per-website likelihood view above.
[Pie chart: distribution of the overall vulnerability population by class: Cross-Site Scripting, Information Leakage, Content Spoofing (7%), Cross-Site Request Forgery, Brute Force, Insufficient Transport Layer Protection, Insufficient Authorization (12%), SQL Injection, Other]
Executives often ask how the organization’s current security posture compares to their peers or competitors. They want
to know if they are doing enough. Perfect security is
impossible, and the attempt is prohibitively expensive and for many completely unnecessary.
If an organization is a target of opportunity, a goal of being just above average with respect to
security is reasonable; attackers will simply move on to softer,
and therefore easier to breach, targets. On the other hand, if an organization is a target of
choice, it must elevate its security posture to a point where an adversary’s
efforts are detectable, preventable, and, in case of a compromise, survivable. This is due to the
fact that such adversaries will spend whatever time is necessary to find a vulnerability to
exploit.
[At-a-Glance industry profile: THE CURRENT STATE OF WEBSITE SECURITY
Headline figures: 81, 11, 54, 107 (serious* vulnerabilities per site per year, percentages that have been fixed, and days).
Most common vulnerability classes (percent of sites with at least one instance): Cross-Site Scripting, Information Leakage, Content Spoofing, Brute Force, Fingerprinting, Cross-Site Request Forgery, Insufficient Authorization.
Days over a year that a site is exposed to serious* vulnerabilities: 24% Always Vulnerable; 33% Frequently Vulnerable (271-364 days a year); 9% Regularly Vulnerable (151-270 days); 11% Occasionally Vulnerable (31-150 days); 24% Rarely Vulnerable (30 days or less).
Application security behaviors and controls used by organizations, in chart order (software security training for programmers; centralized security controls; static code analysis; Web Application Firewall deployed; transactional/anti-fraud monitoring deployed): 57%, 29%, 57%, 29%, 71%.]
*Serious vulnerabilities are defined as those in which an attacker could take control over all, or a part, of a website, compromise user accounts, access sensitive data or violate compliance requirements.
[At-a-Glance industry profile: THE CURRENT STATE OF WEBSITE SECURITY
Headline figures: 81, 50, 67, 226 (serious* vulnerabilities per site per year, percentages that have been fixed, and days).
Most common vulnerability classes (percent of sites with at least one instance): Information Leakage, Cross-Site Scripting, Content Spoofing, Cross-Site Request Forgery, Brute Force, Directory Indexing, SQL Injection.
Days over a year that a site is exposed to serious* vulnerabilities: 28% Always Vulnerable; 28% Frequently Vulnerable (271-364 days a year); 10% Regularly Vulnerable (151-270 days); 10% Occasionally Vulnerable (31-150 days); 23% Rarely Vulnerable (30 days or less).
Application security behaviors and controls used by organizations, in chart order (training; centralized security controls; static code analysis; WAF deployed; transactional/anti-fraud monitoring deployed): 64%, 50%, 50%, 70%, 40%.]
[At-a-Glance industry profile: THE CURRENT STATE OF WEBSITE SECURITY
Headline figures: 90, 22, 53, 276 (serious* vulnerabilities per site per year, percentages that have been fixed, and days).
Most common vulnerability classes chart (percent of sites with at least one instance).
Days over a year that a site is exposed to serious* vulnerabilities: 48% Always Vulnerable; 22% Frequently Vulnerable (271-364 days a year); 12% Regularly Vulnerable (151-270 days); 7% Occasionally Vulnerable (31-150 days); 10% Rarely Vulnerable (30 days or less).
Application security behaviors and controls used by organizations, in chart order (training; centralized security controls; static code analysis; WAF deployed; transactional/anti-fraud monitoring deployed): 67%, 50%, 83%, 67%, 34%.]
[At-a-Glance industry profile: THE CURRENT STATE OF WEBSITE SECURITY
Headline figures: 91, 106, 54, 224 (serious* vulnerabilities per site per year, percentages that have been fixed, and days).
Most common vulnerability classes (percent of sites that had at least one example): Information Leakage, Cross-Site Scripting, Content Spoofing, Cross-Site Request Forgery, Brute Force, Directory Indexing, SQL Injection.
Days over a year that a site is exposed to serious* vulnerabilities: 54% Always Vulnerable; 21% Frequently Vulnerable (271-364 days a year); 6% Regularly Vulnerable (151-270 days); 5% Occasionally Vulnerable (31-150 days); 13% Rarely Vulnerable (30 days or less).
Application security behaviors and controls used by organizations, in chart order (training; centralized security controls; static code analysis; WAF deployed; transactional/anti-fraud monitoring deployed): 73%, 70%, 90%, 60%, 70%.]
[At-a-Glance industry profile: THE CURRENT STATE OF WEBSITE SECURITY
Headline figures: 85, 18, 61, 71 (serious* vulnerabilities per site per year, percentages that have been fixed, and days).
Most common vulnerability classes (percent of sites with at least one instance): Cross-Site Scripting, Information Leakage, Content Spoofing, Cross-Site Request Forgery, Fingerprinting, Brute Force, URL Redirector Abuse.
Days over a year that a site is exposed to serious* vulnerabilities: 5% Always Vulnerable; 64% Frequently Vulnerable (271-364 days a year); 10% Regularly Vulnerable (151-270 days); 9% Occasionally Vulnerable (31-150 days); 11% Rarely Vulnerable (30 days or less).
Application security behaviors and controls used by organizations, in chart order (training; centralized security controls; static code analysis; WAF deployed; transactional/anti-fraud monitoring deployed): 48%, 72%, 96%, 52%, 32%.]
How do we integrate security throughout the software development lifecycle (SDLC)? How do
we measurably improve the security of the software we produce? How do we ensure we are
procuring secure software?
When attempting to address these questions, it is very easy for organizations to throw away
precious time and money. Corporate security teams looking for guidance will often borrow from
various industry standards listing out “best-practices” that are usually just copy-pasted from a
disjointed chain of “experts.” These so-called best-practices might include software security
training for developers, security testing during QA, static code analysis, centralized controls,
Web Application Firewalls, penetration-testing, and others. The term “best-practices” implies
the activity is valuable in every organization at all times. We think most would agree that just
because a certain practice works well for one organization does not mean it will automatically
work well at another.
Of course, we all would like to be able to say, “organizations that provide software security
training for their developers experience 25% fewer serious vulnerabilities annually than those
who do not.” Or, “organizations that perform application security testing prior to each major
production release not only have fewer vulnerabilities year-over-year, but exhibit a 35% faster time-to-fix.”
Unfortunately, the commonly held assumption is that the listed best-practices above have
somehow been statistically demonstrated to cost-effectively increase software and website
security within any organization – that there is some supporting data-backed evidence, that
there are studies showing vulnerability volumes and breaches go down when these activities are
implemented. The fact is there isn’t any study or data repository to this effect, at least not when it
comes to anything related to website security.
To date, the best-known study of real-world SDLC security activities is BSIMM (the Building Security In Maturity Model), which documents what a collection of large software organizations actually do; it does not, however, correlate those activities with
the occurrence and severity of website breaches. The observations in the study are not paired
with outcomes. In other words, BSIMM describes what participating organizations are doing, NOT what organizations should
be like if they want to keep from getting hacked.
We must be willing to question supposed best-practices, even those performed by large and reputable organizations. With
so much at risk every day online, there must be no tolerance for tradition and sacred cows.
Organizations need to better understand how various parts of their SDLC affect the introduction
of vulnerabilities, which leave the door open to breaches. That said, we used BSIMM as a source
of inspiration, a jumping off point to further the collective understanding of what really works in
software and website security.
To move in this direction, during February 2013, we asked WhiteHat Security customers to assist
in our research by completing a survey about their SDLC. How often do you perform security tests on your code during QA? What is your typical rate of production
code change? Do you perform static code analysis? Have you deployed a Web Application
Firewall? Who in your organization is accountable in the event of a breach? We even asked “has
your website been breached?”
We received survey responses from 76 organizations, surpassing that of BSIMM, and then
correlated those responses with company demographics and WhiteHat Sentinel website
vulnerability data. The results were both stunning and deeply puzzling. The connections
between various software security controls, SDLC behaviors, and the resulting vulnerability outcomes
and breaches are far more complicated than we ever imagined.
Note: While we are able to correlate security performance relative to the size of an organization
by revenue and numbers of employees, we did not perform that analysis in this report. This may
be an avenue of additional study later on.
To begin, we asked, “Please select the answer that best describes the typical rate of production application
code change in your organization’s website(s)” and then sorted the results by industry.
(Figures 7, 8, and 9)
Overall, 53% of organizations said their software projects contain an application library or
framework that centralizes and enforces security controls. When results are broken down by
industry, we see all industries making at least some use of them, with Technology and Retail
making the heaviest use, though adoption is not universal anywhere.
Overall, 39% of organizations said they perform some amount of Static Code Analysis on their
websites’ underlying applications. However, when the data is sliced by industry, we see wide
variation of adoption and use of SCA. Financial Services and Retail appear to be making the most
use. Interestingly, 40% of Technology companies and 50% of Healthcare companies are going
without any form of SCA.
Web Application Firewalls have many uses, including gaining visibility into incoming website attacks and the ability to block them.
Exposed vulnerabilities, which might otherwise be externally exploitable, may be protected with a
WAF. In our experience, WAFs in production can be in a number of deployment states – not all of
which are in active blocking mode. We asked, “Please describe the state of your organization’s Web Application Firewall deployment.”
Overall, 55% of organizations said they have a Web Application Firewall (WAF) in some state
of deployment. As it stands, WAFs are in “monitoring and actively blocking attacks” mode in
nearly one-third of organizations across all industries we have data for. The only exception was
Healthcare, where 17% are actively blocking, but 50% more have WAFs at least monitoring
attacks. We also know that the PCI-DSS regulation has provisions that may encourage WAF deployment,
which could account for some of what we see in this chart.
(Figure 18)
Organizations often knowingly leave unresolved vulnerabilities they are well aware of. We wanted to better understand these dynamics to get a sense
of what drives remediation and what gets in its way.
Proponents of compliance often suggest that mandatory regulatory controls be treated as a “security
floor,” not a ceiling. While that is a nice concept in casual conversation, it is not the enterprise reality we see. Keep in mind that WhiteHat Sentinel reports are often used to
satisfy a plethora of auditors, though WhiteHat Security is neither a PCI-DSS ASV nor a QSA vendor. When
organizations are required to allocate funds toward compliance, which may or may not enhance
security, there are often no resources left or tolerance by the business to do anything more effective.
We also sorted the results by industry to see if anything stood out. Here we see Healthcare is heavily motivated by Compliance and Customer
Demand. Meanwhile, a leading reason vulnerabilities in
Healthcare websites go unresolved is “No one at the organization understands or is responsible for
maintaining the code.” It seems this sector is challenged by large amounts of legacy code and systems.
Perhaps this explains why we see a large deployment of WAFs in this industry. (Respondents could select multiple
answers.)
When asked whether compliance and corporate policy help or hurt their security programs, responses varied by industry. In
Technology, roughly one in 10 organizations said both have a negative impact “all the time.”
On the other hand, a good number of organizations across all industries said the impact was positive at least some of the time.
As we can see from Figure 28 below, if an organization’s #1 driver for resolving vulnerabilities is
Corporate Policy, they perform the best when it comes to Average Vulnerabilities per Site, second
in Average Time-to-Fix, and fourth in Average Number of Vulnerabilities Fixed (Remediation Rate).
If Compliance is the leading driver, they have the speediest Time-to-Fix. When an organization’s …
By a clear margin, those who were breached perform better than those who had not been. Analysis here
could go in a number of directions: Did the breach stimulate an increased security performance
to ensure a breach would not happen again? Were these organizations just unlucky? Perhaps
they were breached for reasons unrelated to anything they did in their SDLC or how they performed. The answer could be some combination of these, or
something else entirely. We hesitate to draw hard conclusions, but our intuition tells us that when a breach occurs, the business really starts to
get serious about security.
As anyone working in the industry will tell you, providing software security training for developers
is a long-standing best-practice. The question is: Does it work? Does it measurably improve an
organization’s security metrics and reduce breaches?
Organizations that provided instructor-led or computer-based software security training for their
programmers had 40% fewer vulnerabilities, resolved them 59% faster, but exhibited a 12% lower
remediation rate.
When it comes to security performance metrics, whatever type of training our customers’
programmers are currently receiving, it seems to be working: clear improvements in vulnerability
volume and time-to-fix. A likely explanation for the lower remediation rate is that while developers have a level of control over the number of vulnerabilities they produce,
they often do not control which outstanding issues get prioritized for fixing.
Centralized software security controls hold the promise of improving security metrics by making
things easier on developers. Each developer working on a project no longer has to create their
own custom code for dealing with authentication, authorization, database access, input validation,
and the like; instead, everyone relies on the same shared, vetted components.
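As a minimal sketch of the idea, here are two shared helpers that every application would import instead of re-implementing per page; the validation rule and function names are hypothetical.

    import html
    import re

    _EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def validate_email(value: str) -> str:
        """Single shared definition of a valid email; a bug here is fixed once, everywhere."""
        if not _EMAIL_RE.match(value):
            raise ValueError("invalid email address")
        return value

    def render_untrusted(value: str) -> str:
        """One output-encoding choke point, mitigating Cross-Site Scripting by default."""
        return html.escape(value, quote=True)

    print(render_untrusted("<script>alert(1)</script>"))

The trade-off discussed below follows from the same property: a flaw in a shared helper is itself centralized, and appears everywhere the helper is used.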
First and foremost, it’s important to point out that an organization may not have a comprehensive
centralized control system in place. It is possible they only have one or two controls centralized
(and our survey did not define what it means to “centralize” them ahead of asking the question). That variation cannot be discounted.
It is also possible that some respondents merely think they have centralized security controls; the data therefore reflects perception as much as practice, and
from that standpoint, it is enlightening.
Even with this in mind, we were quite stunned to see that those who supposedly had at least
some centralized controls performed worse across the board than those who
didn’t – with the exception of the remediation rate. Perhaps what we are dealing with is an over-reliance
on these controls: a flaw in a centralized control will surface as a large number of vulnerabilities in each place it is used. And since the controls are
relied upon everywhere, any updates will require extensive QA or run the risk of breaking a large
number of dependent applications.
We want to make sure that we do not discourage the use of centralized security controls. We
believe the concept has a tremendous amount of value. However, proper care must be taken so
that they are comprehensive, effective, tested thoroughly, and measured over time.
Do those utilizing centralized security controls suffer fewer (or more) website breaches than those
who do not? As with developer training, the data doesn’t indicate that centralized controls make
much of a difference in either direction. Again, perhaps what works is a combination of factors.
Perhaps that factor is the amount of pre-production security testing.
(Figure 34)
Organizations that performed Static Code Analysis on their website(s) underlying applications had
15% more vulnerabilities, resolved them 26% slower, and had a 4% lower remediation rate.
As we saw in the results from centralized security controls and pre-production testing, those
organizations saying they perform some amount of SCA actually show worse security metrics
in every category. This measurement was extremely surprising – even shocking. Could it be an
indictment of this class of testing? It could be, but we really don’t think so. More likely, the vulnerabilities the chosen
SCA solution identifies are not the same vulnerabilities that lead to website exploitation. Another
possibility is that the solution was purchased, but not
fully utilized and integrated into the SDLC.
(Figure 35)
Whatever the case may be for why SCA exhibits worse security metrics, the breach metrics
correlation shows no difference – just like every other best-practice we’ve correlated so far.
The Web Application Firewall (WAF) is the last place we’ll look for answers as to what might
help improve an organization’s website security metrics. WAFs have the ability to block incoming
attacks before they reach vulnerable application code.
Organizations with a Web Application Firewall deployment had 11% more vulnerabilities,
resolved them 8% slower, and had a 7% lower remediation rate.
The percentages are small, but it does not appear WAF deployments are making websites measurably more secure. Perhaps more deployed WAFs need to be placed in active
blocking mode. Or perhaps what they are blocking needs to be more precise, actually covering the specific
vulnerabilities the website has. It could also be that when vulnerabilities are found, the organization
needs to assign more resources to managing the WAF rules to get the most out of the technology.
(Figure 36).
And we should be used to it by now, but like everything else we’ve looked at, those with a WAF
deployment fare no better or worse than those without.
Overall, the breach correlation analysis was certainly enlightening, but also disappointing. We
did not find a single practice that statistically improves security metrics AND decreases breaches. The closest we got was software
security training for developers. Maybe that’s the only “best-practice” that really did seem to work
everywhere. However, no matter what you do, nothing appears to reliably protect you from a security
breach.
We don’t want to leave it here, though. What we did see in the data is that many organizations
are clearly deriving value from other things such as centralized controls, pre-production testing,
SCA, and WAFs – while for others, there is clearly no value. The average was somewhere in between. What this tells us is that these solutions can work, certainly they can help, but only when
implemented in the right way, at the right time, targeted at the right problem. In the next section,
we hope to get at some of these answers.
If there is one chart in the entire report that really opened our eyes, it is Figure 38. There is a
huge delta in accountability between those who said they were breached and those who haven’t been. Every
part of the organization, including executive management and the board of directors, showed
similar results. Combine this with the other charts in the section and the evidence is strong:
accountability matters a great deal.
Accountability could be the underlying factor that allows SDLC activities such as training for
developers, centralized controls, pre-production testing, SCA, WAFs, etc. to improve security
performance metrics and decrease breaches. Accountability may be the difference between truly
adopting and internalizing the activity and simply treating it as a time-wasting checkbox. The
answer to what works in application and website security may certainly contain a technique and
process level answer, but must include the support of an organizational mandate. The theory
certainly sounds plausible.
RECOMMENDATIONS
First and foremost, we believe the data contained in this report provides supportive evidence
of something most information security professionals already instinctively know: accountability
is an absolute necessity for an effective security program. If web and software security are
truly important to an organization, then someone in the organization must not only be held
accountable for security performance, but they must also be empowered to effect change.
Accountability may begin at as high a level as the board of directors, which is often the case,
and it must flow down through the organization: through executive management to security teams to software developers, and
even through to the software procurement department. Without a foundation of accountability,
even the best-laid plans will fail. The data leads us in this direction. With accountability, there
will be a tangible return on investment regardless of what best-practices an organization might
implement. Furthermore, high-end security talent appreciates this kind of responsibility and
views it as a career advancement opportunity.
Secondly, there are no best-practices. Our data and experience strongly suggest that security
“best-practices” in the software development lifecycle (SDLC) are not universally effective, but
are situation-dependent. Whether referring to source code reviews, security testing during QA,
Web Application Firewalls, or other methods, each has an appropriate time and place. The
challenge in increasing the organization’s security posture cost-effectively is determining what to
recommend, what to implement, what to purchase, and when. Security teams must get this right.
The tactical key to improving a web security program is having a comprehensive metrics
program in place – a system capable of performing ongoing measurement of the security posture
of production systems, exactly where the proverbial rubber meets the road. Doing so
reveals which practices are working, which are not, and where to invest next.
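As a minimal sketch, two of the report's own metrics, remediation rate and average time-to-fix, can be computed directly from per-vulnerability open/close dates; the records below are hypothetical.

    from datetime import date
    from statistics import mean

    # Hypothetical findings: (opened, closed), with closed=None if still unresolved.
    findings = [(date(2012, 1, 10), date(2012, 3, 1)),
                (date(2012, 4, 2), None),
                (date(2012, 6, 15), date(2012, 7, 30))]

    resolved = [(o, c) for o, c in findings if c is not None]
    remediation_rate = len(resolved) / len(findings)
    avg_time_to_fix = mean((c - o).days for o, c in resolved)

    print(f"Remediation rate: {remediation_rate:.0%}")         # 67%
    print(f"Average time-to-fix: {avg_time_to_fix:.0f} days")  # 48 days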
Below is a step-by-step strategy for building out a website security program that yields results:
0) Assign an individual or group that is accountable for website security: These individuals or
groups may include the board of directors, executive management, security teams, and software
developers. They should be commissioned and authorized to establish a culturally consistent
incentives program that will help move the organization in a positive direction with respect to
security.
1) Create an inventory of the websites the organization deems important: Knowing what systems need to be defended and what
value they have to the organization provides a barometer for an appropriate level of security
investment.
2) Measure the current security posture of those websites: This exercise is not
just about identifying vulnerabilities; while that is a byproduct of the exercise, it’s about
understanding what classes of adversaries you need to defend against and what your current
exposure to them is. Prioritize what’s important.
3) Track how many vulnerabilities are introduced per production code release, what vulnerability classes are most common, and how quickly they are fixed.
WhiteHat Sentinel is built on a Software-as-a-Service (SaaS) platform that scales massively, supports the largest enterprises, and provides ongoing assessment of
website security. It combines proprietary scanning technology with custom testing by the industry’s only Threat Research Center, and it assesses many of the most important websites
on the Web, owned by organizations that care about the security of their websites. Whatever the assessment
schedule, the majority of websites under service are assessed for vulnerabilities multiple times per month.
1. Websites under management range from highly complex and interactive, with large attack surfaces, to static brochureware.
2. The customer-base is largely, but not exclusively, US-based, as are their websites.
3. Vulnerabilities are counted by unique input; if the same issue can be exploited in more than one way, each of those is counted as well. We count this way because a
vulnerability in each parameter may actually lead to a different problem in a different part of the
code.
4. Only serious* vulnerabilities that are directly and remotely exploitable and that may lead to data
loss or account compromise are included. For example, if a website mixes SSL content
with non-SSL on the same Web page, while this may be considered a policy violation, it is not counted unless it is directly exploitable.
7. It is best to view this report as a best-case scenario, and there are always more vulnerabilities to
be found.
9. … vulnerabilities.
The Platform
The Methodology
Scanning Technology
• Production Safe (PE, SE, BE): Non-invasive testing with less performance impact than a single user.
• Authenticated Scans: Patented automated login and session-state management for complete website
coverage.
• … authorization component.
APPENDIX B