CD & DevOps On Security Compliance
Student: Maria Chtepen
Promotor: Prof. dr. Y. Bobbert
Table of Contents
Acknowledgement
Executive summary
Concepts of CI/CD and (Sec)DevOps
Key Terms
Problem Definition and Research Approach
Problem statement
Scope
Research Question
Research Methodology
Theoretical Foundation
Literature Review
Defining the Artifact
ISO/IEC 27001 and NIST SP 800-53 relevant Control Objectives & Controls
(Sec)DevOps Capability Artifact
Evaluating Artifact
Introduction
Subject matter expert interviews
Result comparison with DoD Enterprise DevSecOps Reference Design Report
Conclusion
Research Findings
Research Limitations
Future Research
References
Annex A. Overview of Major Cybersecurity & privacy-related frameworks
Annex B. Interview Transcripts
Interview 1
Interview 2
Interview 3
Interview 4
Interview 5
Interview 6
Interview 7
Interview 8
Acknowledgement
The famous idiom "Time flies when you are having fun" also applies in this case. It has been almost two years since the start of my IT Governance Master in Corsendonk, and despite the recent worldwide pandemic overshadowing the graduation, this was beyond a doubt a period full of new experiences, interesting people and deep self-reflection. Following a Master program while being in the middle of one's professional career provides not only knowledge about the study domain, but also knowledge of oneself as a person.
In this light, I would like to express my gratitude to my promotor, Professor Yuri Bobbert, who shared with me his sharp insight into the domain of my research, guided me through all the difficult decision moments and even provided suggestions on how the result of this research can be valorised. Next to my promotor, a number of people significantly contributed to the research by openly sharing their expertise and opinions through a set of interviews. I would like to thank Steven Bradley, Lieven Vanuytfanghe, Stefan D'Hauwe, Manu Boudewyn, Tim Beyens, Clarence Pinto, Leon Kortekaas and Barry Derksen for voluntarily sharing their knowledge with the community.
And last but not least, thank you to Filip, Helena and Elise for your support and patience. I know
that it was not always easy and the fact that this book got written is largely thanks to you!
Maria Chtepen
March 2020
Executive summary
This Master Thesis studies the impact that concepts such as CI/CD and SecDevOps have on the security compliance of large, strongly regulated organisations. CI/CD and SecDevOps rely on the Agile principles for Software Development & Deployment and value early and continuous software delivery; changing requirements; close collaboration between business, customers and developers; and face-to-face communication as an alternative for extensive documentation. At the same time, regulations often impose on organisations activities such as integrations, software and security architecture. These activities are assumed to require a full overview and rigorous up-front planning to guarantee robustness and fewer security holes.
This Thesis addresses the question of whether DevOps is a benefit to achieving security compliance or whether compliance is an obstacle to realizing DevOps. To answer this question, a methodology known in the scientific literature as Design Science Research is selected. The primary differentiator of this methodology, compared to other research methods, is that its focus lies with the design and investigation of artifacts in a specific context, making use of the existing knowledge base.
The artifact designed in the scope of this research is named the SecDevOps Capability Artifact. It
maps governance and security control objectives impacted by DevOps to the corresponding DevOps con-
trol objectives. These DevOps objectives introduce either an Opportunity or a Risk for the achievement
of the security & governance control objectives. Finally, the Artifact defines a list of SecDevOps controls
that have proven to be effective in combining the agility of the DevOps paradigm with the security com-
pliance assurance.
To design this Artifact, four widely used frameworks and standards (COBIT 5, NIST Cybersecurity Framework, NIST SP 800-53 and ISO 27002) were reviewed for sufficiently detailed security and privacy control objectives and controls. Based on these criteria, the NIST SP 800-53 and ISO 27002 standards were selected for comparison and mapping with (Sec)DevOps controls in this research.
The major findings of the research suggest controls that allow an organisation which traditionally builds its processes around compliance requirements to incorporate SecDevOps. The controls suggest how to satisfy these requirements without sacrificing too much of the flexibility and speed that form the major advantage of SecDevOps in the first place. The most important research findings can be summarized as follows:
• Part of the tasks of the Security department should be moved to the SecDevOps teams.
• Segregation of duties becomes less “a people job”, and more an automated process.
• SecDevOps requires new standards for software design: instead of releasing changes, products are released.
• Finally, automation is at the core of many recommendations, meaning that the full gain of SecDevOps can only be achieved when the majority of tasks can be performed within the appropriate tooling, linked into an automated CI/CD pipeline.
The SecDevOps Capability Artifact is validated by means of an extensive academic literature review and interviews with multiple domain experts and practitioners. Finally, an additional validation was performed by comparing the findings of this study with the high-level implementation and operational guidance of the DoD Enterprise DevSecOps Reference Design report. The purpose of that report is to describe the DevSecOps lifecycle and supporting pillars, in line with the NIST Cybersecurity Framework, which is a high-level framework building upon specific controls and processes defined by NIST SP 800-53, COBIT 5 and the ISO 27000 series.
Figure 1. Waterfall versus Agile software development method (Derksen, October 2018).
The main disadvantages of this approach are (1) the need for complete detailed requirements to
be known upfront, before the start of software development, and (2) the fact that the software product is
only seen when the development is completed. It means that the project cannot take off before all the
requirements are collected, and often the product is delivered after a long development process to only
discover that the requirements were incomplete or misinterpreted.
The Agile approach is the extreme opposite of the original sequential software development. It
values early and continuous software delivery; changing requirements; close collaboration between busi-
ness, customers and developers; and face-to-face communication as an alternative for extensive docu-
mentation. Figure 1 shows a graphical representation of a typical Agile software development cycle,
where the project is split into short iterations. There is a deliverable and a feedback loop at the end of
each iteration.
The Agile methodology also has its drawbacks. For instance, detailed planning and cost estimation are major pain points. Since the requirements are refined after each iteration, it is very difficult to predict the scope and the outcome in advance, and it is almost impossible to indicate how much it is going to cost to achieve a certain output. Therefore, Agile is better suited for projects with the following characteristics (Murray, 2015):
At the same time, activities such as integrations, software and security architecture are often
assumed to require a full overview and rigorous up-front planning to guarantee robustness and fewer
security holes. The lack of certainty regarding these aspects can, in my experience, disqualify Agile for
many organisations.
Figure 3. New work vs. unplanned work of High / Low DevOps performers (Forsgren, 2018).
Scientific literature describes a number of approaches to address the inherent difficulty of integrat-
ing security in Agile software development. (Chivers, 2005) argue that security can grow organically
within an agile project by using incremental security architecture that evolves with the code. On the
one hand, unlike conventional security architectures, an incremental architecture remains true to agile
principles by including only the essential features required for the current iteration – it does not try to
predict future requirements. On the other hand, it preserves the link between local functions and system
properties, and it provides a basis for an ongoing review of the system from the security perspective.
Intel (Harkins, 2013) adopts the incremental security architecture approach by identifying four capabil-
ities, or cornerstones, to achieve higher agility and dynamic architecture: trust calculation handles user
identity and access management by dynamically determining what type of access (if any) a user should
be granted to a resource; security zones provide different levels of protection to different types of data,
depending on its criticality; balanced controls fulfil the need for a combination of detective, corrective
and preventive controls (e.g. firewalls); user and data perimeters allow users and data to be treated as additional security perimeters and protected accordingly.
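To make the trust calculation cornerstone more concrete, the following minimal Python sketch shows how a dynamic access decision could combine user, device and network context with the criticality of the requested data. It is purely illustrative: the factors, weights and thresholds are assumptions made here and do not describe Intel's actual implementation (Harkins, 2013).

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated_with_mfa: bool  # strong authentication used?
    device_is_managed: bool            # corporate-managed, patched device?
    network_is_internal: bool          # request originates from an internal network?
    data_criticality: int              # security zone of the resource: 1 (low) to 3 (high)

def trust_score(req: AccessRequest) -> int:
    # Combine contextual factors into a simple additive trust score.
    score = 0
    score += 2 if req.user_authenticated_with_mfa else 0
    score += 2 if req.device_is_managed else 0
    score += 1 if req.network_is_internal else 0
    return score

def access_decision(req: AccessRequest) -> str:
    # Grant full, limited (e.g. read-only) or no access, depending on the
    # trust score relative to the criticality of the requested data.
    required = {1: 1, 2: 3, 3: 4}[req.data_criticality]
    score = trust_score(req)
    if score >= required:
        return "full access"
    if score >= required - 1:
        return "limited access"
    return "denied"

request = AccessRequest(
    user_authenticated_with_mfa=True,
    device_is_managed=False,
    network_is_internal=True,
    data_criticality=3,
)
print(access_decision(request))  # prints "limited access"

In practice, the inputs would come from the identity provider, the device management platform and the data classification of the security zone in which the resource resides.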
By the end of the 2000s, the Agile paradigm had extended its reach into many related fields, including product management, operations, organisational culture and learning, as well as IT infrastructure (Betz,
2016). DevOps (Development and Operations) is a movement in the IT community that uses agile/lean
techniques to add value by increasing collaboration between the development and operations staff. This
can take place at any time in the development life cycle when creating, operating, or updating an IT ser-
vice. By using agile/lean, it allows for IT services to be updated continuously so the speed of delivery can
dramatically increase while stability is improved (Colavita, 2016).
The main driving force for DevOps' wide adoption is the use of fewer resources for software development and maintenance, which is achieved through automation (CA Technologies, 2014) (Delphic, 2016). The core activity in DevOps practices is continuous integration & delivery (CI/CD) (Sharma, 2015) (Bucena, 2017). Figure 2 shows the activities (toolchain), as defined by Gartner, supported by a set of DevOps tools to aid in the delivery, development and management of applications throughout the
system development lifecycle. The research by (Forsgren, 2018) has shown that continuous integration
& delivery predicts lower levels of unplanned work and rework in a statistically significant way. The amount of time spent on new work versus unplanned work or rework was found to differ significantly between CI/CD high performers and low performers. The differences are shown in Figure 3.
With the introduction of DevOps, concerns remained about how DevOps impacts the security aspects of the developed software. In fact, security is among the major concerns that limit the adoption of DevOps processes (Mohan V. O., 2016) (CA Technologies, 2014). This triggered the coining of the terms SecDevOps and DevSecOps. Both refer to incorporating security practices in the DevOps processes by
promoting collaboration between the development teams, the operations teams and the security
teams.
A number of security practices for DevOps are described in literature. (Farroha, 2014) (Schneider,
2015) refer to the automation practices for integration into SecDevOps, such as automating testing to
detect noncompliance, tracking compliance breaches through automated reporting of violations, contin-
uous monitoring and maintenance of a service catalogue with tested and certified services, security scan-
ning, configuration automation, etc. (Storms, 2015) believes that DevOps fails to include security
throughout the process and leaves it to the end. The security problems due to DevOps that Storms lists
include the high pace of deployments, the unclear access restrictions, and the lack of audit and control
points. Finally, (Mattetti, 2015) and (Bass, 2015) approach security in DevOps through, respectively, plat-
form and application hardening, while (Farroha, 2014) and (Storms, 2015) identified a set of tools that
could be integrated into DevOps to support security, monitoring, and logging.
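As a minimal illustration of the automation practices referenced above (automated testing to detect noncompliance, automated reporting of violations and security scanning), the Python sketch below evaluates the findings of a security scan against a simple policy and fails the pipeline stage when the policy is violated. The policy thresholds and the expected scanner output format are assumptions made for illustration; they are not prescribed by the cited authors or by any particular tool.

import json
import sys

# Example policy: how many findings of each severity the pipeline tolerates.
POLICY = {"critical": 0, "high": 0, "medium": 5}

def evaluate(findings):
    # Compare scanner findings against the policy and return a list of violations.
    counts = {}
    for finding in findings:
        severity = finding.get("severity", "unknown").lower()
        counts[severity] = counts.get(severity, 0) + 1
    violations = []
    for severity, allowed in POLICY.items():
        if counts.get(severity, 0) > allowed:
            violations.append(f"{counts[severity]} {severity} findings exceed the allowed {allowed}")
    return violations

if __name__ == "__main__":
    # The pipeline is assumed to pass the scanner report as a JSON file argument,
    # e.g. produced by a dependency or container image scan in an earlier stage.
    with open(sys.argv[1]) as report:
        findings = json.load(report)
    violations = evaluate(findings)
    for violation in violations:
        print(f"COMPLIANCE VIOLATION: {violation}")  # automated reporting of violations
    sys.exit(1 if violations else 0)  # a non-zero exit code fails the pipeline stage

Because the gate runs as an ordinary pipeline step, its output can also be archived as audit evidence, which is relevant to the audit and control points discussed by (Storms, 2015).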
When looking at different solutions proposed in literature to address security concerns in the agile
software development, the question arises: how can we determine which controls and mechanisms are
the most suitable and effective for a particular organisation?
Key Terms
Before proceeding with the discussion of the subject, it is important to agree on the terms and def-
initions used. Since there are multiple interpretations of the same term in the literature, for consistency
reasons, definitions from the Department of Defense (DoD) Enterprise DevSecOps Reference Design re-
port (Lam, 2019) are used, if not indicated otherwise. The reason for this choice is that the DoD report is
used for the validation of the findings of this research in the later chapters of this document.
Access Control: Means to ensure that access to assets is authorized and restricted based on business and security requirements (ISO/IEC 27000:2014(E)).
Advanced Persistent Threat (APT): An adversary that possesses sophisticated levels of expertise and significant resources which allow it to create opportunities to achieve its objectives by using multiple attack vectors (e.g., cyber, physical, and deception). These objectives typically include establishing and extending footholds within the information technology infrastructure of the targeted organisations for purposes of exfiltrating information, undermining or impeding critical aspects of a mission, program, or organisation; or positioning itself to carry out these objectives in the future. The advanced persistent threat: (i) pursues its objectives repeatedly over an extended period of time; (ii) adapts to defenders’ efforts to resist it; and (iii) is determined to maintain the level of interaction needed to execute its objectives (NIST.SP.800-53r4).
Agile: A software development approach that is opposite to the original sequential software development. It values early and continuous software delivery; changing requirements; close collaboration between business, customers and developers; and face-to-face communication as an alternative for extensive documentation. An Agile project is split into short iterations. There is a deliverable and a feedback loop at the end of each iteration (Murray, 2015).
Audit Log: A chronological record of information system activities, including records of system accesses and operations performed in a given period (NIST.SP.800-53r4).
Audit Record: An individual entry in an audit log related to an audited event (NIST.SP.800-53r4).
Authentication: Verifying the identity of a user, process, or device, often as a prerequisite to allowing access to resources in an information system (NIST.SP.800-53r4).
Availability: Ensuring timely and reliable access to and use of information (NIST.SP.800-53r4).
Change Management: Change Management seeks to minimize the risk associated with Changes, where ITIL defines a Change as "the addition, modification or removal of anything that could have an effect on IT services". This includes Changes to the IT infrastructure, processes, documents, supplier interfaces, etc. (ITIL 4).
Capability: A measure of the ability of an entity (department, organization, person, system) to achieve its objectives, especially in relation to its overall mission (www.businessdictionary.com/definition/capability.html).
CI/CD Pipeline: The set of tools and the associated process workflows to achieve continuous integration and continuous delivery with build, test, security, and release delivery activities, which are steered by a CI/CD orchestrator and automated as much as practice allows.
CI/CD Pipeline Instance: A single process workflow and the tools to execute the workflow for a specific software language and application type. As much of the pipeline process is automated as is practicable.
Cloud computing / Cloud: The use of computing resources — servers, database management, data storage, networking, software applications, and special capabilities such as blockchain and artificial intelligence (AI) — over the internet, as opposed to owning and operating those resources yourself, on premises.
Compared to traditional IT, cloud computing offers organisations a host of benefits: the cost-effectiveness of paying for only the resources you use; faster time to market for mission-critical applications and services; the ability to scale easily, affordably and — with the right cloud provider — globally; and much more. Many organisations are seeing additional benefits from combining public cloud services purchased from a cloud services provider with private cloud infrastructure they operate themselves to deliver sensitive applications or data to customers, partners and employees.
There are four well-known types of Cloud services:
• Infrastructure as a Service (IaaS): the original cloud computing service, which provides foundational computing resources — physical or virtual servers, operating system software, storage, networking infrastructure, data centre space — that you use over an internet connection on a pay-as-you-use basis. IaaS lets you rent physical IT infrastructure for building your own remote data centre on the cloud, instead of building a data centre on premises.
• Platform as a Service (PaaS): provides a complete cloud-based platform for developing, running and managing applications without the cost, complexity and inflexibility of building and maintaining that platform on premises. The PaaS provider hosts everything — servers, networks, storage, operating system software, databases — at its data centre. Development teams can use all of it for a monthly fee based on usage, and can purchase more resources on demand, as needed. With PaaS you can deploy web and mobile applications to the cloud in minutes, and innovate faster and more cost-effectively in response to market opportunities and competitive threats.
• Serverless Computing: a hyper-efficient PaaS, differing from conventional PaaS in two important ways:
o Serverless offloads all responsibility for infrastructure management tasks (scaling, scheduling, patching, provisioning) to the cloud provider, allowing developers to focus all their time and energy on code.
o Serverless runs code only on demand — that is, when requested by the application — enabling the cloud customer to pay for compute resources only when the code is running. With serverless, you never pay for idle computing capacity.
• Software as a Service (SaaS): application software that runs in the cloud, and which customers use via an internet connection, usually in a web browser, typically for a monthly or annual fee. SaaS is still the most widely used form of cloud computing. SaaS lets you start using software rapidly: just sign up and get to work. It lets you access your specific instance of the application and your data from any computer, and typically from any mobile device. If your computer or mobile device breaks, you don’t lose your data (because it’s all in the cloud). The software scales as needed. And the SaaS vendor applies fixes and updates without any effort on your part (https://www.ibm.com/cloud/learn/cloud-computing).
Code: Software instructions for a computer, written in a programming language. These instructions may be in the form of either human-readable source code, or machine code, which is source code that has been compiled into machine executable instructions.
Common Secure Configuration: A recognized standardized and established benchmark that stipulates specific secure configuration settings for a given information technology platform (NIST.SP.800-53r4).
Configuration Management: A collection of activities focused on establishing and maintaining the integrity of information technology products and information systems, through control of processes for initializing, changing, and monitoring the configurations of those products and systems throughout the system development life cycle (NIST.SP.800-53r4).
Configuration Settings: Set of parameters that can be changed in hardware, software, or firmware that affect the security posture and/or functionality of the information system (NIST.SP.800-53r4).
Containers: A standard unit of software that packages up code and all its dependencies, down to, but not including, the Operating System (OS). It is a lightweight, standalone, executable package of software that includes everything needed to run an application except the OS: code, runtime, system tools, system libraries and settings. Several containers can run in the same OS without conflicting with one another. Since they run on the OS, no hypervisor (virtualization) is necessary (though the OS itself may be running on a hypervisor). Containers are much smaller than a VM, typically by a factor of 1,000 (MB vs GB), partly because they don’t need to include the OS. Using containers allows denser packing of applications than VMs. Unlike VMs, containers are portable between clouds or between cloud and on-premise servers. This helps alleviate Cloud Service Provider (CSP) lock-in, though an application may still be locked-in to a CSP, if it uses CSP-specific services. Containers also start much faster than a VM (seconds vs. minutes), partly because the OS doesn’t need to boot.
Control Objective: A Control Objective is an assessment object that defines the risk categories for a Process or Sub-Process. Control Objectives define the compliance categories that the Controls are intended to mitigate. Control Objectives can be classified into categories such as Compliance, Financial Reporting, Strategic, Operations, or Unknown. After a Control Objective is identified, the Risks belonging to that Control Objective can then be defined. In most cases, each Control Objective has one Risk that is associated with it. However, it might also have more than one Risk. For example, a financial services company employs traders that are aware of the required ethical standards. The HR department sets up a control objective called 'Personnel'. A risk that is associated with the Control Objective is, "Employees engage in business dealings that conflict with the company objectives for ethical and fair trading." (IBM Knowledge Center, https://www.ibm.com/support/knowledgecenter/SSFUEU_7.3.0/com.ibm.swg.ba.cognos.op_app_help.7.3.0.doc/c_about_ctrl_obj.html).
Cyber Attack: An attack, via cyberspace, targeting an enterprise’s use of cyberspace for the purpose of disrupting, disabling, destroying, or maliciously controlling a computing environment/infrastructure; or destroying the integrity of the data or stealing controlled information (NIST.SP.800-53r4).
Cyber Security: The ability to protect or defend the use of cyberspace from cyber-attacks (NIST.SP.800-53r4).
DevOps: DevOps is an approach which streamlines interdependencies between development and operations through a set of protocols and tools. DevOps facilitates an enhanced degree of agility and responsiveness through continuous integration, continuous delivery, and continuous feedback loops between Development teams and Operation teams (McCarthy M. A., 2014).
Enterprise Architecture: A strategic information asset base, which defines the mission; the information necessary to perform the mission; the technologies necessary to perform the mission; and the transitional processes for implementing new technologies in response to changing mission needs; and includes a baseline architecture; a target architecture; and a sequencing plan (NIST.SP.800-53r4).
Event: Any observable occurrence in an information system (NIST.SP.800-53r4).
External Information System Service Provider: A provider of external information system services to an organisation through a variety of consumer-producer relationships including but not limited to: joint ventures; business partnerships; outsourcing arrangements (i.e., through contracts, interagency agreements, lines of business arrangements); licensing agreements; and/or supply chain exchanges (NIST.SP.800-53r4).
External Network: A network not controlled by the organisation (NIST.SP.800-53r4).
Hardware: The physical components of an information system (NIST.SP.800-53r4).
Impact: The effect on organisational operations, organisational assets, individuals, other organisations, or the Nation (including the national security interests of the United States) of a loss of confidentiality, integrity, or availability of information or an information system (NIST.SP.800-53r4).
Information Leakage: The intentional or unintentional release of information to an untrusted environment (NIST.SP.800-53r4).
Information Security: The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability (NIST.SP.800-53r4).
Information Security Architecture: An embedded, integral part of the enterprise architecture that describes the structure and behaviour for an enterprise’s security processes, information security systems, personnel and organisational subunits, showing their alignment with the enterprise’s mission and strategic plans (NIST.SP.800-53r4).
Information Security Risk: The risk to organisational operations (including mission, functions, image, reputation), organisational assets, individuals, other organisations, and the Nation due to the potential for unauthorized access, use, disclosure, disruption, modification, or destruction of information and/or information systems (NIST.SP.800-53r4).
Information Technology: Any equipment or interconnected system or subsystem of equipment that is used in the automatic acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information by the executive agency. For purposes of the preceding sentence, equipment is used by an executive agency if the equipment is used by the executive agency directly or is used by a contractor under a contract with the executive agency which: (i) requires the use of such equipment; or (ii) requires the use, to a significant extent, of such equipment in the performance of a service or the furnishing of a product. The term information technology includes computers, ancillary equipment, software, firmware, and similar procedures, services (including support services), and related resources (NIST.SP.800-53r4).
Integrity: Guarding against improper information modification or destruction, and includes ensuring information non-repudiation and authenticity (NIST.SP.800-53r4).
Internal Network: A network where: (i) the establishment, maintenance, and provisioning of security controls are under the direct control of organisational employees or contractors; or (ii) cryptographic encapsulation or similar security technology implemented between organisation-controlled endpoints provides the same effect (at least with regard to confidentiality and integrity). An internal network is typically organisation-owned, yet may be organisation-controlled while not being organisation-owned (NIST.SP.800-53r4).
Local Access: Access to an organisational information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network (NIST.SP.800-53r4).
Organisation: An entity of any size, complexity, or positioning within an organisational structure (e.g., a federal agency or, as appropriate, any of its operational elements) (NIST.SP.800-53r4).
Penetration Testing: A test methodology in which assessors, typically working under specific constraints, attempt to circumvent or defeat the security features of an information system (NIST.SP.800-53r4).
Privileged Account: An information system account with authorizations of a privileged user (NIST.SP.800-53r4).
Privileged User: A user that is authorized (and therefore, trusted) to perform security-relevant functions that ordinary users are not authorized to perform (NIST.SP.800-53r4).
Risk: A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence. Information system-related security risks are those risks that arise from the loss of confidentiality, integrity, or availability of information or information systems and reflect the potential adverse impacts to organisational operations (including mission, functions, image, or reputation), organisational assets, individuals, other organisations, and the Nation (NIST.SP.800-53r4).
Risk Assessment: The process of identifying risks to organisational operations (including mission, functions, image, reputation), organisational assets, individuals, other organisations, and the Nation, resulting from the operation of an information system. Part of risk management, incorporates threat and vulnerability analyses, and considers mitigations provided by security controls planned or in place. Synonymous with risk analysis (NIST.SP.800-53r4).
Risk Management: The program and supporting processes to manage information security risk to organisational operations (including mission, functions, image, reputation), organisational assets, individuals, other organisations, and the Nation, and includes: (i) establishing the context for risk-related activities; (ii) assessing risk; (iii) responding to risk once determined; and (iv) monitoring risk over time (NIST.SP.800-53r4).
Risk Mitigation: Prioritizing, evaluating, and implementing the appropriate risk reducing controls/countermeasures recommended from the risk management process (NIST.SP.800-53r4).
Role-Based Access Control: Access control based on user roles (i.e., a collection of access authorizations a user receives based on an explicit or implicit assumption of a given role). Role permissions may be inherited through a role hierarchy and typically reflect the permissions needed to perform defined functions within an organisation. A given role may apply to a single individual or to several individuals (NIST.SP.800-53r4).
SecDevOps / DevSecOps: DevSecOps strives to automate core security tasks by embedding security controls and processes into the DevOps workflow. DevSecOps originally focused primarily on automating code security and testing, but now it also encompasses more operations-centric controls. Security can benefit from automation by incorporating logging and event monitoring, configuration and patch management, user and privilege management, and vulnerability assessment into DevOps processes. In addition, DevSecOps provides security practitioners with the ability to script and monitor security controls at a much larger and more dynamic scale than traditional in-house data centres (Shackleford, D., SANS, 2016).
Security: A condition that results from the establishment and maintenance of protective measures that enable an enterprise to perform its mission or critical functions despite risks posed by threats to its use of information systems. Protective measures may involve a combination of deterrence, avoidance, prevention, detection, recovery, and correction that should form part of the enterprise’s risk management approach (NIST.SP.800-53r4).
Security Control: Can be effectively used to protect information and information systems from traditional and advanced persistent threats in varied operational, environmental, and technical scenarios. The controls can be used to demonstrate compliance with a variety of governmental, organisational, or institutional security requirements. Organisations have the responsibility to select the appropriate security controls, to implement the controls correctly, and to demonstrate the effectiveness of the controls in satisfying established security requirements. The security controls facilitate the development of assessment methods and procedures that can be used to demonstrate control effectiveness in a consistent/repeatable manner—thus contributing to the organisation’s confidence that security requirements continue to be satisfied on an ongoing basis. In addition, security controls can be used in developing overlays for specialized information systems, information technologies, environments of operation, or communities of interest (NIST.SP.800-53r4).
Security Objective: Confidentiality, integrity, or availability (NIST.SP.800-53r4).
Security Policy: A set of criteria for the provision of security services (NIST.SP.800-53r4).
Software: Computer programs and associated data that may be dynamically written or modified during execution (NIST.SP.800-53r4).
Threat: Any circumstance or event with the potential to adversely impact organisational operations (including mission, functions, image, or reputation), organisational assets, individuals, other organisations, or the Nation through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service (NIST.SP.800-53r4).
Threat Assessment: Formal description and evaluation of threat to an information system (NIST.SP.800-53r4).
One of the main problems that characterizes this misfit is the merging of development and opera-
tions in DevOps. Developers are assigned operational responsibilities such as debugging running produc-
tion systems, but traditional compliance controls restrict access to production environments for devel-
opers (Michener, 2016). Multiple scholars therefore advocate for a hybrid environment in which the
DevOps process is integrated into the specific environment as much as possible but stays restricted by
applicable regulations (Yasar, 2017). (Mohan V. O., 2018) also reveals that the main security concerns for DevOps automation are: separation of roles, enforcement of access controls, manual security tests,
audit, security guidelines, management of security issues, and participation of the security team. The ma-
jor recommended best practices for a transformation of current processes to SecDevOps are: good docu-
mentation and logging, strong collaboration and communication, automation of processes, and enforce-
ment of separation of roles.
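The enforcement of separation of roles mentioned above can itself be automated as a pipeline control rather than remaining a manual check. The following Python sketch illustrates a hypothetical "four-eyes" rule that blocks a production deployment unless the change has been approved by at least one person other than its author; the data structure and field names are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    change_id: str
    author: str              # who made the change
    approvers: tuple         # who approved it for deployment
    target_environment: str

def segregation_of_duties_ok(change):
    # A production change must be approved by at least one person other than
    # the author; non-production environments are not restricted in this sketch.
    if change.target_environment != "production":
        return True
    independent_approvers = [a for a in change.approvers if a != change.author]
    return len(independent_approvers) >= 1

change = ChangeRequest(
    change_id="CHG-1042",
    author="alice",
    approvers=("alice",),    # self-approval only
    target_environment="production",
)
if not segregation_of_duties_ok(change):
    raise SystemExit("Deployment blocked: independent approval is required.")

In practice, the author and approver identities would be taken from the version control system and the change or approval tool, so that the rule is enforced by the pipeline itself instead of by manual review.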
DevOps and automation often go together with the use of Cloud infrastructure. A major part of the
risk inherent to any cloud scenario is the security posture of the cloud provider. Most reputable cloud
providers offer a variety of controls attestation documents, such as the SSAE 16 SOC 2, ISO 27001 and
ISO 27002 reports, or a report on the Cloud Security Alliance Cloud Controls Matrix (CCM) (Shackleford,
D., SANS, 2016). Security teams should review this documentation carefully when choosing a cloud pro-
vider to decide whether cloud deployment is compliant with the requirements of their organisation.
While the examples mentioned above show compliance as an obstacle to deploying an efficient and
automated DevOps process, (Laukkarinen T. K., May 2017) use the example of medical device and health
software IEC/ISO standards to show that DevOps can in certain cases also be used as a helpful tool to
ensure compliance (Plant, 2019). They found that DevOps was beneficial for implementing most requirements. For example, clause 5.8.6 of IEC 62304 for medical device software requires that the procedure and environment of the software creation be documented. In DevOps, this can easily be done with development tools such as the project management tool JIRA, source code repositories such as GIT and automation software such as Jenkins. Furthermore, using immutable Docker containers allows for a repeatable installation and release process, which is required by clause 5.8.8. However, the authors also
identified three obstacles that slow down the CI and CD procedures. Firstly, software units have to be
verified, which means that Continuous Integration can only happen after all units have passed unit test-
ing. Secondly, all tasks and activities such as unfinished documentation have to be completed before the
release of a software unit. Lastly, Continuous Deployment through remote updating to the customer is
not possible with IEC 82304-1 because the responsibility has to be transferred explicitly to the customer
when taking the software into use.
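Returning to the clause 5.8.6 example above, documentation of the procedure and environment of software creation can be produced as a by-product of the automated pipeline itself. The Python sketch below collects build metadata into a release manifest that can be archived with the release; the metadata fields and environment variable names are assumptions, and real CI/CD tools expose equivalent information through their own variables and APIs.

import json
import os
import platform
from datetime import datetime, timezone

def build_manifest():
    # Collect build provenance from the pipeline environment (variable names are
    # hypothetical examples of what a CI/CD orchestrator could provide).
    return {
        "built_at": datetime.now(timezone.utc).isoformat(),
        "source_commit": os.environ.get("GIT_COMMIT", "unknown"),
        "pipeline_run": os.environ.get("BUILD_ID", "unknown"),
        "build_host_os": platform.platform(),
        "python_version": platform.python_version(),
        "container_image": os.environ.get("BASE_IMAGE", "unknown"),
    }

if __name__ == "__main__":
    manifest = build_manifest()
    # The manifest is stored next to the release artefacts so that the procedure
    # and environment of software creation remain traceable for audits.
    with open("release-manifest.json", "w") as output:
        json.dump(manifest, output, indent=2)
    print(json.dumps(manifest, indent=2))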
(Laukkarinen T. K., 2018) concluded in a follow-up paper on DevOps in regulated software envi-
ronments that tighter integration between development tools, requirements management, version con-
trol and the deployment pipeline would aid the creation of regulatory compliance development practices.
However, the authors also note that regulations and accompanied standards could be improved to better
relate regulations with DevOps practices. On the other hand, (Farroha, 2014) suggests that to integrate
compliance and security throughout a deployed cloud application, automated testing for non-compliance
and policy needs to be leveraged. Additionally, the enterprise should pre-build, test and certify services
in a Services Catalog. To ensure compliance and security rules are adhered to, continuous monitor-
ing/alerting and automation to detect and mitigate critical issues must be implemented. Finally, to track
breaches in compliance, the application will need to do the following: a) report compliance violations automatically; b) terminate access when a threshold is exceeded; c) initiate alarms when a new policy is not accepted.
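The following minimal Python sketch illustrates the three tracking behaviours just listed: it reports each violation, terminates access once a threshold is exceeded, and raises an alarm when a new policy is not accepted. The threshold, class structure and notification mechanism are illustrative assumptions rather than part of the cited work.

VIOLATION_THRESHOLD = 3  # tolerated violations per identity before access is terminated

class ComplianceMonitor:
    def __init__(self):
        self.violations = {}   # identity -> number of recorded violations
        self.revoked = set()   # identities whose access has been terminated

    def report_violation(self, identity, detail):
        # a) automatically report each compliance violation
        self.violations[identity] = self.violations.get(identity, 0) + 1
        print(f"COMPLIANCE VIOLATION by {identity}: {detail}")
        # b) terminate access when the threshold is exceeded
        if self.violations[identity] > VIOLATION_THRESHOLD and identity not in self.revoked:
            self.revoked.add(identity)
            print(f"ACCESS TERMINATED for {identity} (violation threshold exceeded)")

    def policy_update(self, identity, accepted):
        # c) initiate an alarm when a new policy is not accepted
        if not accepted:
            print(f"ALARM: {identity} has not accepted the new policy")

monitor = ComplianceMonitor()
for _ in range(4):
    monitor.report_violation("service-account-7", "write to a restricted bucket")
monitor.policy_update("service-account-7", accepted=False)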
Scope
Not every organisation has to comply with regulatory requirements, and the impact of compliance also varies between companies. Many high-profile Internet firms such as Amazon, Facebook, Flickr, Google and Netflix serve today as a general reference in terms of process agility, speed of release and deployment, and customer orientation. However, it should not be forgotten that their regulatory requirements are rather limited and cannot be compared to the banking or medical industry. A number of interviews with representatives from companies that do not have regulatory compliance obligations1 have clearly shown that their approach to DevOps significantly differs from the approach of companies in heavily regulated industries. Their main priorities are productivity and quality, facilitated by the use of DevOps and continuous deployment (Savor, 2016). At the same time, regulated companies have to take into account the preservation of their operational licenses, while trying to achieve the same level of productivity and quality. This difference results in a significant impact on the way in which DevOps, and Agile in general, is integrated into the organisation.
Regulatory compliance of the organisation describes the efforts the organisation is taking to comply with relevant laws, policies and regulations (Lin, 2016). Due to the increasing complexity and number of regulations, organisations often refer to the use of consolidated and approved sets of compliance controls described in compliance frameworks and standards (Silveira, 2012). This approach allows organisations to ensure that all the necessary governance requirements are met without duplicating efforts.
Applicable compliance frameworks vary among countries and industries, with examples such as PCI-DSS for the financial industry, FISMA for U.S. federal agencies, HACCP for the food and beverage industry, and HIPAA in healthcare. Other commonly applicable references are the COBIT framework and the ISO and NIST standards.
1 Two interviews took place: 1) a security consulting company, which serves, among others, customers in the non-regulated segment that apply DevOps; 2) the Belgian department of a global French company, which is starting to apply Agile software development in combination with DevOps.
Since it is impossible to cover the complete spectrum of organisations and their compliance frameworks in a single study, the scope is limited to large European companies (more than 1000 employees) located in Belgium and the Netherlands. Their approach to addressing regulatory compliance in combination with DevOps is evaluated.
As a reference providing the set of applicable compliance controls, two standards are selected for evaluation within this thesis: ISO/IEC 27001 and NIST.SP.800-53r4. ISO/IEC 27001 is an Information Security Standard and is part of the ISO/IEC 27000 family of standards. It specifies an Information Security Management System (ISMS). An ISMS is a framework of policies and procedures that includes all legal, physical and technical controls involved in an organisation’s information risk management processes. The use of ISO 27001 is widespread in the European Union. Similar to ISO (International Organisation for Standardization), NIST (National Institute of Standards and Technology) also defines an industry-leading approach to Information Security Management. However, NIST 800-53 is more security-control driven, more technical and less risk focused than ISO 27001. The standard is mainly used in the USA, but is also regularly applied as a reference in European companies, which is facilitated by many synergies between both standards and predefined mappings of controls. All companies considered in scope of this study rely on ISO/IEC 27001, NIST.SP.800-53r4 or both to implement their ISMS.
Research Question
The main goal of this research is to investigate the impact of DevOps on the compliance with the
ISO/IEC 27001 and NIST.SP.800-53r4 standards in heavily regulated large organisations. This goal will
be achieved by addressing the following research questions:
• Research Question 1: What are the major security compliance controls that are im-
pacted by DevOps adoption?
• Research Question 2: Which “sacrifices” have to be made in the DevOps implementation in order to preserve compliance?
• Research Question 3: How can DevOps assist in assuring compliance?
• Research Question 4: What are the best practices facilitating the implementation of
SecDevOps (i.e. the integration of security into DevOps)?
The deliverable of this research is a SecDevOps Capability Artifact, which consists of the following components:
• A list of governance and security control objectives, as defined by ISO/IEC 27001 and NIST.SP.800-53r4, which are generally impacted when DevOps capabilities are rolled out by an organisation
• A mapping between respective ISO/IEC 27001 and NIST.SP.800-53r4 control objectives,
from the perspective of DevOps implementation
• DevOps control objectives, with a link to the corresponding ISO and NIST objectives, and
an indication of the impact. The impact of DevOps control objectives on ISO and NIST ob-
jectives may be twofold: 1) it creates an Opportunity to achieve compliance assurance in
a more effective or efficient way; 2) it creates a Risk of sacrificing a security control objec-
tive in favour of flexibility and speed
• A list of (Sec)DevOps controls that have proven to be effective in combining the agility of
the DevOps paradigm with the security compliance assurance.
According to an online Business Dictionary2, Capability is a measure of the ability of an entity (de-
partment, organization, person, system) to achieve its objectives, especially in relation to its overall mis-
sion. The SecDevOps Capability Artifact defined in this study will therefore increase a company’s (Sec)DevOps capability in the context of regulatory compliance with the ISO/IEC 27001 and NIST.SP.800-53r4 standards.
Research Methodology
The methodology selected to answer the above-mentioned research questions is known in the sci-
entific literature as Design Science Research (DSR) (Hevner A. M., 2004). The primary differentiator of
DSR, compared to other research methods, is that its focus lies with the design and investigation of arti-
facts in a specific context, making use of the existing knowledge base. According to Hevner, the main
purpose of design science research is achieving knowledge and understanding of a problem domain by building and applying a designed artifact.
DSR consists of three inherent research cycles, as shown in Figure 4 (Hevner A. , 2007):
• The Relevance Cycle bridges the contextual environment of the research project with the
design science activities.
• The Rigor Cycle connects the design science activities with the knowledge base of scientific
foundations, experience, and expertise that informs the research project.
• The central Design Cycle iterates between the core activities of building and evaluating the
design artifacts and processes of the research.
In this thesis a single artifact will be designed, the so-called SecDevOps Capability Artifact. The process for designing the artifact follows the approach described by (Johannesson, 2014), which can be found in Figure 5. Due to the limited timeline foreseen for a Master’s Thesis, this research is limited to the Problem Space of the design (green rectangle in Figure 5), focussing on the literature field study, expert interviews and comparative analysis. The remaining steps of the design are left open for further study.
2 http://www.businessdictionary.com/definition/capability.html
Figure 5. A Design Science Research approach to developing artifact requirements (green square indicates the research space).
The main artifact developed in the scope of this study can be described as follows:
A mapping between ISO/IEC 27001 and NIST.SP.800-53 control objectives and the correspond-
ing DevOps controls objectives & controls. This mapping allows large regulated organisations to design
and implement their DevOps strategy and practices, while finding the optimal balance between the
security compliance and the speed & flexibility of DevOps.
To start the exploration into the main capabilities for an effective & efficient DevOps strategy, in
line with the control objectives of the selected standards, a combination of literature and exploratory
research was used. A literature study is conducted to gather knowledge about the domain of the topic of
interest and knowledge about relevant theories and research methods that can be applied to develop
new knowledge (Recker, 2013). Exploratory research, on the other hand, is used to investigate a problem that is not clearly defined. It is conducted to gain a better understanding of the existing problem, but will not provide conclusive results. For such research, a researcher starts with a general idea and uses this research as a medium to identify issues that can be the focus of future research. An important aspect of this approach is that the researcher should be willing to change the research direction subject to the revelation of new data or insights. Such research is usually carried out when the problem is at a preliminary stage. It is often referred to as the grounded theory approach or interpretive research, as it is used to answer questions like what, why and how.
Grounded Theory (Recker, 2013) is a type of qualitative research that relies on inductive genera-
tion (building) of theory based on (‘grounded in’) qualitative data systematically collected and analysed
about a phenomenon, such as existing DevOps implementations within relevant organisations. The
grounded theory approach essentially attempts to explore for, and develop, generalized formulations
about the basic features of a phenomenon while simultaneously grounding the account in empirical ob-
servations or data. One of the key advantages – and challenges – of the grounded theory approach is that
it is applicable to research domains that are new or emergent and may yet lack substantive theory.
• The process of theory building is highly iterative, during which theory and data are con-
stantly compared, so-called constant comparative analysis. This step is achieved by de-
riving theoretical statements from the conducted literature study and comparing them
against the practical examples and knowledge from the field via expert interviews.
• The theory is built upon theoretical sampling as a process of data collection and analysis,
which is driven by concepts that emerge from the study and appear to be of relevance to
the nascent theory. In the case of this research, the results collected from the previous step
are validated through additional literature research, to confirm the established theory.
Theoretical Foundation
Software is nowadays a critical asset of almost every organisation, but producing great applications and
services within a competitive timeline requires modern development and delivery processes.
Fundamental to this challenge is building trust, managing risk and exceeding customers’ expectations for security and privacy across the business, online, via apps and in the data centres. It is becoming imperative to weave security into every step of the development process, from design, through coding, to release and operation, which is supported by concepts such as SecDevOps. Research by (CA Technologies, 2014) confirmed that organisations which see effective security as an enabler of increased business performance significantly outperform their mainstream peers. This manifests itself in the form of superior metrics and outcomes in relation to software delivery. It is probably also no coincidence that these security-minded organisations are seeing 40 percent higher revenue growth and 50 percent higher profit growth than their mainstream contemporaries.
As was demonstrated in the previous sections, many research efforts are dedicated to determining the right set of security measures in the context of Agile development. These measures are not identical for each type of organisation and depend on varying factors, of which the strategy is the most important one (Stackpole, 2010). Stackpole points to the fact that many organisations try to shortcut the analysis phase and end up failing to include business drivers, business unit direction or big-picture input into their planning cycles. When not enough time has been spent gathering the big picture, the likelihood increases
that the organisation will be more reactive to the environment than proactively helping to shape it. In marketing jargon, this would be called market-shaping activities instead of market-reacting activities. Market-shaping activities involve identifying the drivers shaping demand and surveying which existing products and services might meet that demand, which in turn helps identify gaps in the market and develop a strategy for market-shaping activities. A similar approach can be used to plan a proactive security strategy. First, gather the information needed to identify the issues affecting organisational security (now and into the future), and then compare existing and future requirements to the current capabilities to identify gaps in security functionality. Next, build a strategic plan to fill those gaps. Figure 6 charts some of the basic domains within an enterprise that a security group must consider as it develops strategy.
A typical security strategy is a plan to mitigate risks while complying with legal, statutory, contractual and internally developed requirements. But a security strategy resides inside an organisational strategy that may have very different drivers than a security strategic plan. In order for the two strategic plans to align and work well together, there must be a clear understanding of both plans and clear links between them. When designing the enterprise-wide strategy, as well as the security strategy, the following input should be considered (Stackpole, 2010):
• Environmental scan (e.g. industry & competitor analysis, marketing research, technology trends, etc.)
• Regulations & legal environment
• Industry standards
• Customer base
• Organisational culture
• Business drivers
Defining the strategy means understanding the strategic risks, both internal and external. Security risks have a number of touchpoints with general risk management frameworks (see Figure 7), as outlined in the CFO Forum Practitioner's Guide (CFO Forum):
• Strategy definition: assures alignment between the organisation's overall aspirations and the high-level security plan; identifies any associated risks or uncertainty that may arise; defines risk capacity & appetite.
• Strategy implementation: as with any other business or functional area, a set of objectives and plans to support the security strategy needs to be defined; standard risk management processes, including a detailed risk assessment, need to be implemented, followed by an appropriate risk response (treat; tolerate; terminate; transfer), establishment of a risk appetite and key risk indicators.
• Strategy monitoring: the strategy implementation progress should be monitored, together with the key risks and escalation of any breaches of risk appetite.
In an ideal world, once the strategy, risks and corresponding threats are known, it would be possible to answer the question: "Which mechanisms and controls shall be applied within Agile Software Development & Operations to make sure a software system is completely secure?". In reality, it is impossible to answer this question, since "completely secure" would mean that all possible threats to a software system are identified, which is impossible to guarantee. However, if an Information Security Architecture, Frameworks and Standards are used, assurance can be provided that security has been sufficiently implemented, taking into account all the known threats (Derksen, October 2018).
Software products can be extremely diverse, and threats to a software system depend on numerous criteria: the type of software, how it is built, the environment in which it is used and how it is used. Once the threats are defined, it becomes possible to verify whether the software system is sufficiently protected against these threats. This protection is realized by secure design, by implementing mitigation measures, by coding securely, by security testing, etc. The use of an Information Security Architecture and corresponding Frameworks and Standards aims to provide a secure software development process, by embedding them into secure organisational cultures, systems and behaviour, which are in line with the organisation's strategy. A more formal definition of Information Security Architecture is provided by Vael (Vael, 2019):
The practice of applying a comprehensive & rigorous method describing current and/or future structure & behaviour for an organisation's security processes, information security systems, personnel & organisational sub-units, so they align with the organisation's core goals & strategic direction.
COBIT® 5 (ISACA)
The NIST CSF is recognized by many as a resource to help improve the security operations and governance of public and private organisations. It provides a guideline for transforming the organisational security posture and risk management from a reactive to a proactive approach. The framework is organized into five core Functions, also known as the Framework Core. The Functions operate concurrently with one another to represent a security lifecycle, and each of them is essential to a well-operating security posture and successful management of cybersecurity risk.
Advantages:
• Detailed checklists, mathematical formulas and other materials are available for free from the NIST website
• Frequently assessed and restructured as new technologies and regulations arise
• May be used in conjunction with other frameworks from NIST and/or other parties

Disadvantages:
• Heavily US-focused supporting documentation
• Prescriptive and not easy to adopt
• Finding appropriate support to effectively implement the methodology may be difficult outside the US
This Special Publication is published by the National Institute of Standards and Technology (NIST). NIST is responsible for developing information security standards and guidelines, including minimum requirements for federal information systems. This publication provides a catalogue of security and privacy controls for federal information systems and organisations and a process for selecting controls to protect organisational operations (including mission, functions, image, and reputation), organisational assets, individuals, other organisations, and the Nation from a diverse set of threats including hostile cyber-attacks, natural disasters, structural failures, and human errors. The controls are customizable and implemented as part of an organisation-wide process that manages information security and privacy risk. The controls address a diverse set of security and privacy requirements across the federal government and critical infrastructure, derived from legislation, Executive Orders, policies, directives, regulations, standards, and/or mission/business needs.
The publication also describes how to develop specialized sets of controls, or overlays, tailored for specific types of missions/business functions, technologies, or environments of operation. Finally, the catalogue of security controls addresses security from both a functionality perspective (the strength of security functions and mechanisms provided) and an assurance perspective (the measures of confidence in the implemented security capability). Addressing both security functionality and security assurance ensures that information technology products and the information systems built from those products using sound systems and security engineering principles are sufficiently trustworthy.
The goal of the publication is to achieve more secure information systems and effective risk management within the US federal government by:
• Providing a stable, yet flexible catalogue of security controls to meet current information protection needs and the demands of future protection needs based on changing threats, requirements, and technologies;
Advantages:
• Available for free from the NIST website

Disadvantages:
• Heavily US-focused supporting documentation
• Prescriptive and not easy to adopt
An ISMS such as that specified in ISO/IEC 27001 takes a holistic, coordinated view of the organisation's information security risks in order to implement a comprehensive suite of information security controls under the overall framework of a coherent management system. Many information systems have not been designed to be secure in the sense of ISO/IEC 27001 and this standard. The security that can be achieved through technical means is limited and should be supported by appropriate management and procedures. Identifying which controls should be in place requires careful planning and attention to detail. A successful ISMS requires support by all employees in the organisation. It can also require participation from shareholders, suppliers and other external parties. This standard helps to define the requirements for the different participants and stakeholders.
The examples above are just some of the many available frameworks. For a more complete list of cybersecurity & privacy related statutory, regulatory and industry frameworks, see Annex A. Overview of Major Cybersecurity & privacy-related frameworks. There are many other generic or specific ones, such as PCI DSS, CSC / TOP 20, SCF (Secure Controls Framework), etc. However, the selection made gives an overview of high-level versus more detailed widely-used frameworks and standards.
It is important to mention that the selection of a cybersecurity framework is in the first place a business decision and less a technical decision. The choice must be driven by a fundamental understanding of what the organisation needs to comply with from a statutory, regulatory and contractual perspective, since that understanding establishes the minimum set of requirements necessary to (1) not be considered negligent with reasonable expectations for security & privacy; (2) comply with applicable laws, regulations and contracts; and (3) implement the proper controls to secure systems, applications and processes from reasonable threats (Compliance Forge Website, sd).
For the purpose of this study, the four considered frameworks / standards (COBIT 5, NIST Cybersecurity Framework, NIST SP 800-53 and ISO 27002) are reviewed from the point of view of the content each one offers. The content had to provide sufficient security and privacy controls "out of the box" in order to avoid the need for further detailing and interpreting while mapping the DevOps controls. From that perspective, COBIT and the NIST Cybersecurity Framework lacked sufficient coverage to be considered comprehensive cybersecurity frameworks, giving preference to NIST SP 800-53 and ISO 27002. These considerations were extensively discussed with the promotor of this thesis, prof. Yuri Bobbert, leading to the decision to limit this study to the two above-mentioned and widely-used standards, NIST SP 800-53 and ISO 27002. While ISO 27002 is a subset of NIST SP 800-53, where fourteen ISO 27002 sections of security controls fit within twenty-six families of NIST SP 800-53, both are widely used and suitable for large regulated companies. Therefore, both standards are considered for comparison and mapping with (Sec)DevOps controls by this research.
Literature Review
As mentioned in the Research Methodology chapter, this study relies on a methodology that makes use of an extensive literature study in order to provide a solid scientific basis. Before initiating this study, the following open questions related to the academic literature review were stated.
Within this research we review current (Sec)DevOps and CI/CD practices and determine their control objectives that either pose a risk or introduce an opportunity to compliance. For each control objective, controls are determined which facilitate the achievement of compliance with ISO 27002 and NIST SP 800-53.
To answer the first question, this research refers to the parallel study currently ongoing within Antwerp Management School: "A framework for continuous security & security velocity metering in DevOps". That study reviews, among others, the recent academic literature to determine the most relevant Agile and DevOps practices applicable today. The selected list of practices serves as a basis for the investigation of the appropriate controls. This investigation brings us to the second question, which periodically appears in the recent academic literature but is not fully developed and answered by current research efforts.
To answer the second question, a literature review was conducted using the Google Scholar web search engine for academic literature, which indexes peer-reviewed academic journals, books, conference papers, theses, etc. Since DevOps and CI/CD are rapidly evolving domains, the search was limited to recent publications, going back a maximum of 5 years, i.e. from 2014 to 2019.
The following search strings were used as input in Google Scholar: "DevOps compliance", "DevSecOps compliance", "SecDevOps compliance", "Continuous Delivery compliance", "Continuous Deployment compliance", "Continuous integration compliance", "ISO 27001 DevOps", "ISO 27002 DevOps", "ISO 27001 Agile", "ISO 27002 Agile", "NIST SP DevOps". The search was performed on the title and the content of the publications. Only publications written in English were considered.
The first search strings provided a large number of potentially relevant results. The further the search progressed, the larger the number of duplicate publications became, indicating the exhaustion of the search space. The total number of potentially relevant publications, based on their title and abstract, across all search strings amounts to 88, of which 36 were retained as relevant for this research after a thorough abstract review. Furthermore, the text of the 36 selected publications was examined in detail, including a forward search on the recent references. The forward search provided 13 additional relevant publications that were used as the foundation of this study.
The control objectives and controls of ISO/IEC 27001 and NIST SP 800-53 impacted by the introduction of (Sec)DevOps were determined as follows:
• Step 1: security control objectives and the corresponding controls are listed which are mentioned by the scientific literature (see Literature Review) as relevant within this context
• Step 2: the results of Step 1 are reviewed by domain experts during subsequent individual interviews (see Subject matter expert interviews) and the original input is modified based on their experience and recommendations
Below is an overview of the outcome. Taking into account the widespread use of ISO/IEC 27001 within the European Union, this standard is used as the core for structuring the SecDevOps Capability Artifact. However, it is extended with the more detailed related controls of NIST SP 800-53. The mapping between the two standards follows Appendix H of SP 800-53 Revision 4. However, it is important to mention that this reference mapping is very exhaustive, and the sections that have no direct link to (Sec)DevOps capabilities are left out, following the same logic as described above.
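To illustrate how such a mapping can be handled in practice, the sketch below encodes a small excerpt of the resulting ISO/IEC 27001 to NIST SP 800-53 mapping as plain data, so that artifact rows could be generated or cross-checked automatically. The representation itself is an assumption made for illustration; only control pairs that appear later in this document are included.

```python
# Minimal sketch: an ISO/IEC 27001 -> NIST SP 800-53 mapping excerpt encoded as data,
# so that artifact rows can be generated or checked automatically.
# The control selection below is illustrative, not the complete mapping.

ISO_TO_NIST = {
    "6.1.2 Segregation of duties": ["AC-5"],
    "6.1.5 Information security in project management": ["SA-3"],
    "9.4.5 Access control to program source code": ["CM-5"],
    "12.1.2 Change management": ["CM-3", "CM-5", "SA-10"],
    "12.4.1 Event logging": ["AU-3", "AU-6", "AU-12"],
}


def nist_controls_for(iso_control: str) -> list[str]:
    """Return the mapped NIST SP 800-53 controls for a given ISO/IEC 27001 control."""
    return ISO_TO_NIST.get(iso_control, [])


if __name__ == "__main__":
    for iso, nist in ISO_TO_NIST.items():
        print(f"{iso} -> {', '.join(nist)}")
```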
6.1.2 Segregation of duties — AC-5 SEPARATION OF DUTIES

ISO/IEC 27001:
• Conflicting duties and areas of responsibility should be segregated to reduce opportunities for unauthorized or unintentional modification or misuse of the organisation's assets
• Care should be taken that no single person can access, modify or use assets without authorization or detection
• The initiation of an event should be separated from its authorization. The possibility of collusion should be considered in designing the controls

NIST SP 800-53:
• Separation of duties includes, for example: (i) dividing mission functions and information system support functions among different individuals and/or roles; (ii) conducting information system support functions with different individuals (e.g., system management, programming, configuration management, quality assurance and testing, and network security); and (iii) ensuring security personnel administering access control functions do not also administer audit functions
6.1.5 Information security in project management — SA-3 SYSTEM DEVELOPMENT LIFE CYCLE

ISO/IEC 27001:
• Information security should be integrated into the organisation's project management method(s) to ensure that information security risks are identified and addressed as part of a project
• The project management methods in use should require that: a) information security objectives are included in project objectives; b) an information security risk assessment is conducted at an early stage of the project to identify necessary controls; c) information security is part of all phases of the applied project methodology
• Responsibilities for information security should be defined and allocated to specified roles defined in the project management methods

NIST SP 800-53:
• Manages the information system using [Assignment: organisation-defined system development life cycle] that incorporates information security considerations
• Defines and documents information security roles and responsibilities throughout the system development life cycle
• Identifies individuals having information security roles and responsibilities
• Integrates the organisational information security risk management process into system development life cycle activities
• Security awareness and training programs can help ensure that individuals having key security roles and responsibilities have the appropriate experience, skills, and expertise to conduct assigned system development life cycle activities
• The effective integration of security requirements into enterprise architecture also helps to ensure that important security considerations are addressed early in the system development life cycle and that those considerations are directly related to the organisational mission/business processes
7.2 During employment — 7.2.2 Information security awareness, education and training — AT-2 SECURITY AWARENESS TRAINING; AT-3 ROLE-BASED SECURITY TRAINING; PM-13 INFORMATION SECURITY WORKFORCE
Objective: To ensure that employees and contractors are aware of and fulfil their information security responsibilities.

ISO/IEC 27001:
• All employees of the organisation and, where relevant, contractors should receive appropriate periodic awareness education and training and regular updates in organisational policies and procedures, as relevant for their job function

NIST SP 800-53:
• The organisation provides basic security awareness training to information system users (including managers, senior executives, and contractors): a. As part of initial training for new users; b. When required by information system changes; and c. [Assignment: organisation-defined frequency] thereafter
• The organisation provides role-based security training to personnel with assigned security roles and responsibilities: a. Before authorizing access to the information system or performing assigned duties; b. When required by information system
9.4.5 Access control to program source code — CM-5 ACCESS RESTRICTIONS FOR CHANGE

ISO/IEC 27001:
• Access to program source code should be restricted. For program source code, this can be achieved by controlled central storage of such code
• An audit log should be maintained of all accesses to program source libraries
• Maintenance and copying of program source libraries should be subject to strict change control procedures
• If the program source code is intended to be published, additional controls to help getting assurance on its integrity (e.g. digital signature) should be considered

NIST SP 800-53:
• The organisation employs an audited override of automated access control mechanisms under [Assignment: organisation-defined conditions]
• Organisations maintain records of access to ensure that configuration change control is implemented and to support after-the-fact actions should organisations discover any unauthorized changes. Access restrictions for change also include software libraries
• The information system prevents the installation of [Assignment: organisation-defined software and firmware components] without verification that the component has been digitally signed using a certificate that is recognized and approved by the organisation
• The organisation enforces dual authorization for implementing changes to [Assignment: organisation-defined information system components and system-level information]
• The organisation limits privileges to change software resident within software libraries
• The organisation: (a) Limits privileges to change information system components and system-related information within a production or operational environment; and (b) Reviews and re-evaluates privileges [Assignment: organisation-defined frequency]
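The signed-installation requirement in the third NIST bullet above lends itself well to automation in a (Sec)DevOps pipeline. The sketch below is a minimal illustration, assuming the third-party Python package `cryptography` is available and that the organisation distributes an approved RSA public key; key distribution and certificate chain validation are deliberately left out of scope.

```python
# Sketch: refuse to install a component unless its detached signature verifies
# against an organisation-approved public key (assumes the 'cryptography' package).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def is_signature_valid(component: bytes, signature: bytes, approved_key_pem: bytes) -> bool:
    """Return True only if 'signature' over 'component' verifies with the approved key."""
    public_key = serialization.load_pem_public_key(approved_key_pem)
    try:
        public_key.verify(signature, component, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


def install_component(path: str, sig_path: str, approved_key_pem: bytes) -> None:
    """Block installation of any component whose signature cannot be verified."""
    with open(path, "rb") as f, open(sig_path, "rb") as s:
        component, signature = f.read(), s.read()
    if not is_signature_valid(component, signature, approved_key_pem):
        raise PermissionError(f"Installation of {path} blocked: signature not recognised")
    # ... hand over to the actual installation / deployment step here ...
```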
12.1 Operational procedures and responsibilities — 12.1.2 Change management — CM-3 CONFIGURATION CHANGE CONTROL; CM-5 ACCESS RESTRICTIONS FOR CHANGE; SA-10 DEVELOPER CONFIGURATION MANAGEMENT
Objective: To ensure correct and secure operations of information processing facilities.

ISO/IEC 27001:
• Changes to the organisation, business processes, information processing facilities and systems that affect information security should be controlled
• The following items should be considered: identification and recording of significant changes; planning and testing of changes; assessment of the potential impacts, including information security impacts, of such changes; formal approval procedure for proposed changes; verification that information security requirements have been met; fallback procedures, including procedures and responsibilities for aborting and recovering from unsuccessful changes and unforeseen events; provision of an emergency change process to enable quick and controlled implementation of changes needed to resolve an incident
• Formal management responsibilities and procedures should be in place to ensure satisfactory control of all changes

NIST SP 800-53:
• Configuration change control includes changes to baseline configurations for components and configuration items of information systems, changes to configuration settings for information technology products (e.g., operating systems, applications, firewalls, routers, and mobile devices), unscheduled/unauthorized changes, and changes to remediate vulnerabilities
• Typical processes for managing configuration changes to information systems include, for example, Configuration Control Boards that approve proposed changes to systems
• Auditing of changes includes activities before and after changes are made to organisational information systems and the auditing activities required to implement such changes
• The organisation employs automated mechanisms to: (a) Document proposed changes to the information system; (b) Notify [Assignment: organisation-defined approval authorities] of proposed changes to the information system and request change approval; (c) Highlight proposed changes to the information system that have not been approved or disapproved by [Assignment: organisation-defined time period]; (d) Prohibit changes to the information system until designated approvals are received; (e) Document all changes to the information system; and (f) Notify [Assignment: organisation-defined personnel] when approved changes to the information system are completed
• The organisation employs automated mechanisms to implement changes to the current information system baseline and deploys the updated baseline across the installed base
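The automated mechanisms described in the penultimate NIST bullet above (document proposed changes, notify approval authorities and prohibit changes until approvals are received) can be modelled very simply. The sketch below is a minimal illustration under assumed approver roles and a print-based notification placeholder; it is not a prescribed implementation.

```python
# Sketch of a CM-3 style change-control gate: a change is recorded, approvers are
# notified, and deployment is prohibited until every designated approval is present.
from dataclasses import dataclass, field


@dataclass
class ChangeRequest:
    change_id: str
    description: str
    required_approvers: set[str]                  # organisation-defined approval authorities
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver in self.required_approvers:
            self.approvals.add(approver)

    def is_approved(self) -> bool:
        return self.required_approvers <= self.approvals


def notify(approvers: set[str], change: ChangeRequest) -> None:
    for approver in approvers:                    # placeholder for e-mail / chat integration
        print(f"notify {approver}: approval requested for change {change.change_id}")


def deploy_if_approved(change: ChangeRequest) -> None:
    if not change.is_approved():
        raise PermissionError(f"Change {change.change_id} blocked: approvals missing")
    print(f"Deploying change {change.change_id}: {change.description}")


if __name__ == "__main__":
    cr = ChangeRequest("CHG-001", "Update TLS configuration", {"security", "release-mgr"})
    notify(cr.required_approvers, cr)
    cr.approve("security")
    cr.approve("release-mgr")
    deploy_if_approved(cr)
```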
12.1.3 Capacity management — AU-4 AUDIT STORAGE CAPACITY

ISO/IEC 27001:
• The use of resources should be monitored, tuned and projections made of future capacity requirements to ensure the required system performance
• Detective controls should be put in place to indicate problems in due time

NIST SP 800-53:
• The organisation allocates audit record storage capacity in accordance with [Assignment: organisation-defined audit record storage requirements]
12.4 Logging and monitoring — 12.4.1 Event logging — AU-3 CONTENT OF AUDIT RECORDS; AU-6 AUDIT REVIEW, ANALYSIS, AND REPORTING; AU-12 AUDIT GENERATION
Objective: To record events and generate evidence.

ISO/IEC 27001:
• Event logs recording user activities, exceptions, faults and information security events should be produced, kept and regularly reviewed
• Event logging sets the foundation for automated monitoring systems which are capable of generating consolidated reports and alerts on system security

NIST SP 800-53:
• The information system generates audit records containing information that establishes what type of event occurred, when the event occurred, where the event occurred, the source of the event, the outcome of the event, and the identity of any individuals or subjects associated with the event
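The AU-3 requirement above effectively fixes the minimum content of every audit record. As a small illustration, the sketch below emits such records as structured JSON log lines; the field names and format are assumptions, not part of the standard.

```python
# Sketch: emit AU-3 style audit records (type, time, location, source, outcome,
# identity) as structured JSON log lines.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def audit_event(event_type: str, where: str, source: str, outcome: str, identity: str) -> None:
    """Record what happened, when, where, from which source, with what outcome and by whom."""
    record = {
        "type": event_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "where": where,
        "source": source,
        "outcome": outcome,
        "identity": identity,
    }
    audit_log.info(json.dumps(record))


if __name__ == "__main__":
    audit_event("production_deployment", "payments-cluster", "ci-pipeline", "success", "jane.doe")
```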
ISO/IEC 27001 (continued):
• … secure coding guidelines for each programming language used; security requirements in the design phase; security checkpoints within the project milestones; secure repositories; security in the version control; required application security knowledge; developers' capability of avoiding, finding and fixing vulnerabilities
• Developers should be trained in their use and testing and code review should verify their use. If development is outsourced, the organisation should obtain assurance that the external party complies with these rules for secure development

NIST SP 800-53 (continued):
• … metrics [Selection (one or more): [Assignment: organisation-defined frequency]; [Assignment: organisation-defined program review milestones]; upon delivery]
• The organisation requires the developer of the information system, system component, or information system service to select and employ a security tracking tool for use during the development process
• The organisation requires that developers perform threat modeling and a vulnerability analysis for the information system at [Assignment: organisation-defined breadth/depth]
• The organisation requires the developer of the information system, system component, or information system service to reduce attack surfaces to [Assignment: organisation-defined thresholds]
• The organisation requires the developer of the information system, system component, or information system service to implement an explicit process to continuously improve the development process
• The organisation requires the developer of the information system or system component to archive the system or component to be released or delivered together with the corresponding evidence supporting the final security review
• The organisation requires the developer of the information system, system component, or information system service to: (a) Produce, as an integral part of the development process, a formal policy model describing the [Assignment: organisation-defined elements of organisational security policy] to be enforced; and (b) Prove that the formal
Objective: To ensure that information security is implemented and operated in accordance with the organisational policies and procedures.

ISO/IEC 27001:
• Managers should regularly review the compliance of information processing and procedures within their area of responsibility with the appropriate security policies, standards and any other security requirements
• Automatic measurement and reporting tools should be considered for efficient regular review
• Results of reviews and corrective actions carried out by managers should be recorded and these records should be maintained
• Information systems should be regularly reviewed for compliance with the organisation's information security policies and standards
• Technical compliance should be reviewed preferably with the assistance of automated tools, which generate technical reports for subsequent interpretation by a technical specialist. Alternatively, manual reviews (supported by appropriate software tools, if necessary) by an experienced system engineer could be performed
• If penetration tests or vulnerability assessments are used, caution should be exercised as such activities could lead to a compromise of the security of the system. Such tests should be planned, documented and repeatable
• Any technical compliance review should only be carried out by competent, authorized persons or under the supervision of such persons

NIST SP 800-53:
• The organisation: a. Develops, documents, and disseminates to [Assignment: organisation-defined personnel or roles]: 1. A configuration management policy that addresses purpose, scope, roles, responsibilities, management commitment, coordination among organisational entities, and compliance; and 2. Procedures to facilitate the implementation of the configuration management policy and associated configuration management controls; and b. Reviews and updates the current: 1. Configuration management policy [Assignment: organisation-defined frequency]; and 2. Configuration management procedures [Assignment: organisation-defined frequency]
• The organisation includes as part of security control assessments, [Assignment: organisation-defined frequency], [Selection: announced; unannounced], [Selection (one or more): in-depth monitoring; vulnerability scanning; malicious user testing; insider threat assessment; performance/load testing; [Assignment: organisation-defined other forms of security assessment]]
(Figure: derivation of the artifact — specification of the SecDevOps Capability Artifact, followed by review of the artifact by subject matter experts.)
The (Sec)DevOps controls described in what follows are the result of the extensive scientific literature study (see Literature Review) and the review and suggestions by subject domain experts (see Subject matter expert interviews). They specifically focus on large regulated organisations, since these organisations primarily deal with compliance issues and often follow the above-mentioned standards rigorously. The proposed controls provide guidance on either how to reduce the Security Risk without significantly sacrificing the (Sec)DevOps speed gain, or how to optimally use (Sec)DevOps capabilities to improve the Security Compliance.
The artifact is structured in four columns: ISO/IEC 27001 controls impacted by (Sec)DevOps; NIST SP 800-53 controls impacted by (Sec)DevOps; (Sec)DevOps control objectives; and (Sec)DevOps controls. Each entry below lists these four elements in turn.
6.1.1 Information security roles and responsibilities — CM-1 CONFIGURATION MANAGEMENT POLICY AND PROCEDURES; CM-9 CONFIGURATION MANAGEMENT PLAN
Opportunity: Introduce new security-oriented roles & responsibilities (Rindell, 2016)

(Sec)DevOps controls:
• Part of the tasks from the Security department should be moved to the SecDevOps teams. In order to achieve fast security feedback in each step of the development process, as opposed to late reviews at fixed security gates, the Security department should steer the activities in a different way. Part of the security tasks should be executed by SecDevOps teams.
• Define the role of the Security Champion within the agile organisation. This role combines the responsibility of ensuring appropriate Security Gating with support for testing and security automation within the agile team. The Security Champion is in turn supported by a Cyber Defence team for the execution of these tasks. This creates scalability within Cyber Defence teams, since it is not feasible to foresee a dedicated Cyber Defence professional within each team/squad.
• Work according to the "5 amigos" principle, stimulating collaboration between Business, Development, Testing, Security & Operations. Instead of the traditional "3 amigos" Agile principle, where the work is examined from three different perspectives (Business, Development & Testing), we need to start speaking in terms of "5 amigos", adding Security and Operations to the list.
• Security should be positioned at the front of the development pipeline. Often Security does not have enough bandwidth and becomes reactive. Therefore, a merge between Security and Enterprise Architecture is a must. Security should steer the design and implementation process using SABSA principles: test planning, certificate management, the judge & advocate principle, segregation of duties, etc. The role of Security as a police agent should be replaced by the effective design of secure architectures.
6.1.2 Segregation of duties — AC-5 SEPARATION OF DUTIES
Opportunity: Information sharing between Development, Operations and Testing teams (McCarthy M. H., 2015)
Risk: Allow developers to make decisions independently (Savor, 2016)
Risk: Grant team members complete access to production in order to fulfil operational tasks (Plant, 2019)
Risk: All team members must be able to know, understand, and modify the source code (Pastrana, 2019)
Risk: Provide developers with the freedom and possibility to commit changes to production (Shahin M. B., 2017)

(Sec)DevOps controls:
• Automate the production deployment process so no person can execute the deployment without passing the automated controls first. To reduce opportunities for unauthorized or unintentional modification or misuse of the organisation's assets, it is suggested to automate the production deployment process. The same procedure should be used when deploying to non-production environments.
• Foresee full governance for sensitive phases in software development and deployment. For sensitive phases such as production deployment, full governance may be required. To support these cases, systems should prevent mistakes by allowing only the permitted users to approve the execution of these phases.
• Code should be peer-reviewed. In order to ensure that no single person has end-to-end control of a process without a separate check point, code that is checked in should always be peer-reviewed. The same can be achieved by encouraging developers to create merge requests and assigning those to a more senior developer who has to check the code and merge it. The reviewed code should be signed with personal cryptographic signatures of the developer and the reviewer. (A minimal sketch of such an automated gate follows below.)
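The following sketch illustrates the combination of the first and third controls above: a release gate that only allows deployment when all automated controls have passed and the change was reviewed by someone other than its author. The control names and data shapes are illustrative assumptions, not a prescribed implementation.

```python
# Sketch of a segregation-of-duties release gate: deployment is only possible when
# every automated control has passed and the reviewer differs from the author.
from dataclasses import dataclass


@dataclass
class Change:
    author: str
    reviewer: str                        # signs off via the merge request
    control_results: dict[str, bool]     # e.g. {"unit_tests": True, "sast_scan": True}


def may_deploy(change: Change) -> tuple[bool, str]:
    if not change.reviewer or change.reviewer == change.author:
        return False, "peer review by a second person is required"
    failed = [name for name, ok in change.control_results.items() if not ok]
    if failed:
        return False, f"automated controls failed: {', '.join(failed)}"
    return True, "all gates passed"


if __name__ == "__main__":
    change = Change(
        author="dev.a",
        reviewer="dev.b",
        control_results={"unit_tests": True, "sast_scan": True, "dependency_check": True},
    )
    allowed, reason = may_deploy(change)
    print("deploy" if allowed else "blocked", "-", reason)
```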
6.1.5 Information security in project management; 14.2.1 Secure development policy — SA-3 SYSTEM DEVELOPMENT LIFE CYCLE; SA-15 DEVELOPMENT PROCESS, STANDARDS, AND TOOLS; SA-17 DEVELOPER SECURITY ARCHITECTURE AND DESIGN
Opportunity: Use collaboration among teams to collect data about the process to identify weaknesses, assess the performance of the teams, and check compliance with standards (Vaishnavi, 2016)
Opportunity: Develop expertise and processes to best discover, protect

(Sec)DevOps controls:
• Well-defined policies regarding information exchange across teams should be in place to prevent security threats due to collaboration. For instance, SecDevOps may facilitate data sharing and knowledge exchange between teams if sharing happens in a safe way and all data is stored in a shared secure platform, such as O365.
• Risk assessments should be performed from the first planning stage and continuously before every iteration. This is important as a way to prioritize risks, examine controls already in place and decide which are needed going forward.
7.2.2 Information security awareness, education and training — AT-2 SECURITY AWARENESS TRAINING; AT-3 ROLE-BASED SECURITY TRAINING; PM-13 INFORMATION SECURITY WORKFORCE
Opportunity: Foresee security training for development team members (Rahman, 2016)

(Sec)DevOps controls:
• Train the Security Champion. The Security Champion plays a central role in the successful implementation of SecDevOps, and the question of training the right skills of the Security Champion becomes prominent. Specific, detailed training is required for this role, including a "Survival Kit".
• Train the Security Team. The Security team can act as a police force defining the rules or as an advisor. The first way of working is against the agile principles and is not scalable. For instance, static code scanning should be part of the work of an Agile team, part of the tasks of a Security Champion. For the second way of working, the Security team needs new competences, in order to give advice on specific topics, such as coding guidelines.
• Train software developers. Technical training is required for developers, to make sure they are aware of the best practices for secure software development. Security people often do not understand development specifics. Vice versa, developers expect to receive a clear TO-DO list, while merely translating OWASP yields 260 items, ISO 110 items, PCI 400 items, etc. So you need to create a real team with different profiles. For example, the Agile Skills Framework can be used, which advises creating a COP (Community of Practice). Before giving full control to developers as a security team, they need to be aware of the consequences. They should also receive a hardened operational image for deployment, without errors. Developers should not be allowed to modify this image any more, but if they do, the security team should be immediately informed via automatic monitoring and alerting.
9.2.3 Management of privileged access rights; 9.4.4 Use of privileged utility programs; 9.4.5 Access control to program source code — AC-6 LEAST PRIVILEGE; AC-3 ACCESS ENFORCEMENT; CM-5 ACCESS RESTRICTIONS FOR CHANGE
Risk: Search for the right balance between security and compliance, and operational flexibility for developers and testers (Michener, 2016)
Risk: Secure and manage access control to CI pipelines (Hilton, 2017)
Opportunity: Information security should be considered by all developers (Hilton, 2017)

(Sec)DevOps controls:
• Any privileged accounts (such as root and the local administrator accounts) should be monitored very closely (or ideally be disabled completely). For instance, a version control system can be adopted as a central repository that stores the historical changes made to the code. Appropriate individuals must be notified of the event and the associated report for review. All event reports must be stored securely and made available to internal or external reviewers and auditors, as appropriate.
• Use timed access rights by allowing developers to request timed passwords. This grants developers restricted access, for example, to the production environment to perform a change, once it is authorized. The access is logged and a standard notification is sent to the security department. Tooling may help to simplify the task: RepoMan is a tool that uses activity history to reduce account privileges over a certain period of time. While this control seems like an effective compromise, it depends on the circumstances whether it is practical. When developers need access to production twice per week, requesting timed passwords for this frequent access would be rather impractical and would not add much security, since there will be a regular opportunity to tamper with the production environment.
• Grant employees dynamic access rights per sprint by assigning them specific responsibilities and access rights every time.
• It is often sufficient to give DevOps engineers two accounts for the different environments. Developers often need administrator rights in the development environment and restricted user-level rights in the production environment. Do not give developers default privileges to access configuration tools in the pipeline. Only specific roles should be granted this access.
• Maintain a separate CI server for development done outside the company. Maintain an internal CI server that operates behind a company firewall, as well as an external one. The internal one cannot be exposed due to confidentiality requirements, but the external CI may be
12.1.3 Capacity management — AU-4 AUDIT STORAGE CAPACITY
Opportunity: Teams must have visibility into capacity in order to understand the impact on infrastructure when there are increases in workload, transactions, application upgrades, and hardware refresh (McCarthy M. H., 2015)

(Sec)DevOps controls:
• Operations tools should provide key insights into the performance of business transactions before the system goes live. This enables development teams to quickly understand infrastructure dependencies, how functional changes impact business performance, and where refactoring is needed.
• Maintain a dynamic infrastructure environment. In SecDevOps, provisioning of new servers is facilitated by cloud capacity management; in a static environment the concept of SecDevOps is difficult to implement. Everything in the cloud is elastic and can upsize or downsize. You should not have too much capacity either, since it comes with a significant cost. The environment should be periodically reviewed from the point of view of capacity: optimization versus service. You should also monitor that upscaling and downscaling does not happen too often.
12.4.1 Event logging — AU-3 CONTENT OF AUDIT RECORDS; AU-6 AUDIT REVIEW, ANALYSIS, AND REPORTING; AU-12 AUDIT GENERATION; CA-7 CONTINUOUS MONITORING; SI-4 INFORMATION SYSTEM MONITORING
Opportunity: Tools should not only monitor and automatically report incidents such as compliance breaches, but should also continuously perform logging to create traceable processes and valid audit trails (Plant, 2019)
Opportunity: Use logs to share technical information within an organisation (Aljundi, 2018)
Opportunity: Use new logging & alerting techniques to better predict application performance and usage (Ravichandran, 2016)
Opportunity: Tools should provide teams fast feedback on software code effectiveness and problem components (Ravichandran, 2016)
Opportunity: Make logs readable to all stakeholders (Shahin M. B., 2016)
Opportunity: Foresee monitoring infrastructure to quickly identify newly-deployed software that is misbehaving (Savor, 2016)
Opportunity: Use strong monitoring as a means to compensate for a potential lack of preventive controls (Plant, 2019)

(Sec)DevOps controls:
• Integrate process monitoring tools into the deployment pipeline. This allows risk to be minimized and reliable reporting to be created, which can be used by auditors. In case of a problem or if compliance conditions are not fulfilled, these tools can halt the deployment process and alert the developers. Teams need to embrace active monitoring methods to build an understanding of issues before they affect customers. Part of this involves finding better ways to remove misleading alarms and false positives.
• Define monitoring metrics. Mean time to repair (MTTR) and mean time to restore service (MTRS) are more important to track than mean time between failures (MTBF) in most cases, because DevOps teams should focus on learning and moving on from mistakes instead of not making any. Other useful metrics are overall process quality, cost of development, cost of maintenance, accessibility, reliability, interoperability, and availability for audits.
• An effective, dynamic inventory must quickly and continuously discover and validate new assets, or changes in existing assets, as soon as they appear online.
• Have a log-driven, log-specific architecture in the middle of the applications. The software architectures of those applications should continuously collect operational data at the appropriate levels and, more importantly, make it easy to aggregate logs, convert them into an appropriate format and make them searchable.
• Use quality assurance monitoring to check whether the code follows the defined standard. QA monitoring can be automated to reduce resource consumption and speed up defect detection. Encourage DevOps teams to monitor, because they also have to provide operational services for their applications. Without the feedback they receive from monitoring these systems it is more difficult for them to detect problems.
• Use tooling to process logs. There are many different logs, which have been created over the course of the years. For example, access logging is becoming very important for cybersecurity, but this logging generates a huge amount of data. New tooling, such as Splunk, helps to process that data. It allows you to create intelligence over the whole pile of loggings in the technology stack: pattern recognition (user logins, network access, etc.), exceptional situation detection, etc. (A minimal illustration of such pattern recognition follows below.)
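As a small illustration of the last control, the sketch below scans structured access-log lines for one simple pattern (repeated failed logins per account) and raises an alert. The log format, threshold and alerting mechanism are assumptions; a platform such as Splunk would perform this kind of analysis at much larger scale.

```python
# Sketch: simple pattern recognition over access logs - flag accounts with repeated
# failed logins within a batch of log lines (JSON, one record per line).
import json
from collections import Counter
from typing import Iterable

FAILED_LOGIN_THRESHOLD = 5   # assumed alerting threshold


def failed_login_alerts(log_lines: Iterable[str]) -> list[str]:
    """Return the accounts whose failed-login count reaches the threshold."""
    failures: Counter[str] = Counter()
    for line in log_lines:
        record = json.loads(line)
        if record.get("event") == "login" and record.get("outcome") == "failure":
            failures[record.get("user", "unknown")] += 1
    return [user for user, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]


if __name__ == "__main__":
    sample = [json.dumps({"event": "login", "outcome": "failure", "user": "alice"})] * 6
    for user in failed_login_alerts(sample):
        print(f"ALERT: possible brute force against account '{user}'")
```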
12.6.2 Restrictions on software installation — CM-11 USER-INSTALLED SOFTWARE
Risk: Delegate tool choice to teams to contribute to

(Sec)DevOps controls:
• In a contained environment the tooling choice can be delegated to the development team to improve
13.1.3 Segregation in networks — AC-4 INFORMATION FLOW ENFORCEMENT
Opportunity: Implement network isolation and policy control (Shackleford, D., SANS, 2016)

(Sec)DevOps controls:
• Use micro-segmentation for network isolation and policy control in the cloud environment. With this technique, each cloud instance adopts a "zero trust" policy model that allows for very granular network interaction controlled at the virtual machine network interface controller. This allows each cloud system to essentially take its network access control and interaction policy with it as it migrates through virtual and cloud environments, minimizing disruption and reliance on physical and hypervisor-based network technology, although software-defined networking and automation techniques can definitely play a role in micro-segmentation operations. (See the sketch below.)
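One way to reason about such per-instance, zero-trust policies is to express them as data ("policy as code") and evaluate every flow against them, denying anything not explicitly allowed. The sketch below is a simplified illustration of that idea; real cloud platforms enforce this at the virtual network interface with their own constructs (security groups, network policies), and the workload names and rule format used here are assumptions.

```python
# Sketch: zero-trust micro-segmentation expressed as per-instance allow rules.
# Anything not explicitly allowed is denied.
from dataclasses import dataclass


@dataclass(frozen=True)
class Flow:
    source: str        # logical identity of the calling workload
    destination: str   # logical identity of the called workload
    port: int


# Allow-list attached to each workload (conceptually travels with the instance as it migrates).
POLICY: dict[str, set[tuple[str, int]]] = {
    "payments-api": {("web-frontend", 443)},
    "payments-db": {("payments-api", 5432)},
}


def is_allowed(flow: Flow) -> bool:
    allowed_sources = POLICY.get(flow.destination, set())
    return (flow.source, flow.port) in allowed_sources


if __name__ == "__main__":
    print(is_allowed(Flow("web-frontend", "payments-api", 443)))   # True
    print(is_allowed(Flow("web-frontend", "payments-db", 5432)))   # False - denied by default
```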
14.2.7 Outsourced development — SA-4 ACQUISITION PROCESS; SA-12 SUPPLY CHAIN PROTECTION
Opportunity: Teams must have the visibility and insight they need to ensure reliable business service delivery for the applications and business services that have moved to third-party providers (McCarthy M. H., 2015)

(Sec)DevOps controls:
• Foresee service level agreements with hosting providers that include security.
• With off-shore development, the third party needs to be on the same platform, using the same tooling. Just passing the code does not work. There should be time alignment as well, since there are more time dependencies. We should also be aware of vulnerabilities introduced by the third party.
• With cloud, we rely on the configuration of the supplier, who provides the assurance. An auditor controls the cloud provider: compliance & assurance. We work with cascading control frameworks: the more you reuse from the cloud, the fewer controls you need to implement yourself, and the cloud provider guarantees that all controls are correctly implemented.
18.2.2 Compliance with security policies and standards; 18.2.3 Technical compliance review — CM-1 CONFIGURATION MANAGEMENT POLICY AND PROCEDURES; CA-2 SECURITY ASSESSMENTS
Opportunity: Automatically trace compliance breaches and verify standards compliance (Mohan V. O., 2016) (Callanan, 2016) (Plant, 2019)

(Sec)DevOps controls:
• Integrate into SecDevOps the following set of best practices: automating tests to detect noncompliance, tracking compliance breaches through automated reporting of violations, continuous monitoring, and maintenance of a service catalogue with tested and certified services. (A minimal example of such an automated noncompliance test follows below.)
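A minimal example of "automating tests to detect noncompliance" is shown below: a pipeline step that checks each deployed service description against a few baseline rules and reports violations. The rule set and service model are assumptions for illustration only.

```python
# Sketch: automated noncompliance detection over a simple service inventory.
# Each rule returns a violation message or None; any violation fails the pipeline step.
from typing import Callable, Optional

Service = dict[str, object]
Rule = Callable[[Service], Optional[str]]


def require_tls(service: Service) -> Optional[str]:
    return None if service.get("tls_enabled") else "TLS must be enabled"


def require_owner(service: Service) -> Optional[str]:
    return None if service.get("owner") else "an accountable owner must be registered"


RULES: list[Rule] = [require_tls, require_owner]


def compliance_report(services: list[Service]) -> list[str]:
    violations = []
    for svc in services:
        for rule in RULES:
            message = rule(svc)
            if message:
                violations.append(f"{svc.get('name', '<unnamed>')}: {message}")
    return violations


if __name__ == "__main__":
    inventory = [{"name": "billing-api", "tls_enabled": False, "owner": "team-billing"}]
    report = compliance_report(inventory)
    print("\n".join(report) or "no violations")
    raise SystemExit(1 if report else 0)   # a non-zero exit halts the pipeline step
```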
(Sec)DevOps control objectives (continued):
• … stored in version control for reuse (Forsgren, 2018)
• Opportunity: Put application code, system configurations, application configurations and scripts for automating build in a version control system (Forsgren, 2018)
• Opportunity: Integrate performance baselines into the continuous deployment pipeline (Shahin M. B., 2017) (see the sketch below)

(Sec)DevOps controls (continued):
• … The baseline can be defined and modified per application type.
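To illustrate the last objective, the sketch below compares measured response times against a per-application-type baseline that would be kept in version control, and fails the pipeline step when the baseline is exceeded. The baseline values and metric names are assumptions made for illustration.

```python
# Sketch: a continuous-deployment gate that checks measured performance against
# a versioned baseline, defined per application type.
BASELINES_MS = {            # would normally live in version control next to the code
    "web": {"p95_response": 300},
    "batch": {"p95_response": 5000},
}


def within_baseline(app_type: str, measured_ms: dict[str, float]) -> list[str]:
    """Return a list of violations; empty when all measured metrics respect the baseline."""
    baseline = BASELINES_MS.get(app_type, {})
    return [
        f"{metric}: {measured_ms[metric]:.0f} ms > baseline {limit} ms"
        for metric, limit in baseline.items()
        if measured_ms.get(metric, 0) > limit
    ]


if __name__ == "__main__":
    violations = within_baseline("web", {"p95_response": 450.0})
    for v in violations:
        print("PERFORMANCE REGRESSION:", v)
    raise SystemExit(1 if violations else 0)   # fail the deployment stage on regression
```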
Evaluating Artifact
Introduction
The SecDevOps Capability Artifact described in detail in the previous section is created using a step-wise approach. The initial design is derived from a detailed scientific literature review, combined with a study of the controls covered by the applicable standards and frameworks. The resulting artifact is validated through interviews with subject domain experts from diverse large regulated organisations from varying industries. The final validation is performed against the recommendations proposed in the recent Department of Defence "Enterprise DevSecOps Reference Design" report (Lam, 2019). The process of deriving the artifact is summarized in the accompanying figure.
As was mentioned earlier, the scope of this study is limited to a specific company segment: large regulated organisations. Therefore, a set of eight interviews was set up with employees of varying organisations in Belgium and The Netherlands that fall under this category. The goal is to validate the findings from the scientific literature study and to compare them against the practical experience of the interviewees. This makes it possible to filter out the controls that are less suitable or inapplicable to the organisations considered in the scope of this research.
For confidentiality reasons, the names of the companies and the interviewees are not disclosed. However, all interviewees were selected based on the following criteria:
• Over ten years of experience in the Digital / IT industry in at least one of the following roles: Security Architect / Responsible, Delivery/Release Manager, Information Technology Architect, Integration Engineer or similar roles;
• Minimum five years of experience in a large (more than 1,000 employees) organisation in a sector that has strict regulatory compliance requirements, such as banking, insurance, medical, etc.
The interviewees were selected to provide a mixture of security-oriented and operational profiles, to highlight (Sec)DevOps compliance from different perspectives and to find the optimal set of controls, which balances the compliance requirements with the operational needs for speed and flexibility.
• An interviewee receives a draft version of the SecDevOps Capability Artifact, containing the selection of security control objectives and controls from ISO 27002 and NIST SP 800-53 impacted by (Sec)DevOps, the (Sec)DevOps control objectives and the corresponding controls obtained from the literature review. The draft artifact is provided to the interviewee at least a month before the interview takes place and is accompanied by the necessary explanation.
• A semi-structured interview is organized with a duration varying from 1 to 4 hours, depending on the interviewee's availability and progress. The goal of the interview is, first of all, to obtain feedback on the draft SecDevOps Capability Artifact and to refine the selection of the security control objectives and controls impacted by (Sec)DevOps. Secondly, the (Sec)DevOps controls proposed in the literature are discussed to determine whether they are applicable and effective in regulated environments. Finally, the list of controls is extended, taking into account the interviewee's practical experience with (Sec)DevOps.
• The following questions are covered in the indicated order and the input from the interview is recorded (recordings are available on demand):
• How are Information Security roles and responsibilities impacted by the introduction of SecDevOps? Is there any impact at all?
• How can access to the source code be controlled in a SecDevOps environment, which promotes sharing?
• What is the best way to segregate the network while preserving flexibility?
• Is there a change in the role of Security within the Development Lifecycle due to SecDevOps?
The full transcripts of the interviews can be found in Annex B. Interview Transcripts, at the end of this report. Finally, the draft SecDevOps Capability Artifact was modified and extended with the input obtained from the subject matter experts.
Result comparison with DoD Enterprise DevSecOps Reference Design Report
The reason for choosing the DoD Enterprise DevSecOps Reference Design as a means for the validation of the SecDevOps Capability Artifact is that this recent report provides a practical reference for the implementation of the DevSecOps capability within a regulated organisation, in line with the NIST Cybersecurity Framework, which is a high-level framework building upon specific controls and processes defined by NIST SP 800-53, COBIT 5 and the ISO 27000 series.
The table below shows how the findings of this research, defined by the SecDevOps Capability Artifact, are reflected and confirmed by the report.
SecDevOps Capability Artifact:
• Part of the tasks from the Security department should be moved to the SecDevOps teams
• Define the role of the Security Champion within the agile organisation
• Work according to the "5 amigos" principle, stimulating collaboration between Business, Development, Testing, Security & Operations
• Security should be positioned at the front of the development pipeline
• Shift the Security role from operations to process and controls design

DoD Enterprise DevSecOps Reference Design:
• There are nine DevSecOps software lifecycle phases: plan, develop, build, test, release, deliver, deploy, operate and monitor. Security is embedded within each phase
• Change the organisational culture to take a holistic view and share the responsibility of software development, security and operations
• Break down organisational silos. Increase the team communication and collaboration in all phases of the software lifecycle
Segregation of duties
SecDevOps Capability Artifact:
• Automate the production deployment process so no person can execute the deployment without passing the automated controls first

DoD Enterprise DevSecOps Reference Design:
• Most of the processes should be automatable via tools and technologies
SecDevOps Capability Artifact:
• Well-defined policies regarding information exchange across teams should be in place to prevent security threats due to collaboration
• Risk assessments should be performed from the first planning stage and continuously before every iteration
• Manage security controls as an API or "Security/Compliance as Code"
• Design for failure
• Standardize security controls and the security assurance process
• Adopt convention over configuration to reduce the complexity of the CI/CD system
• Use service-oriented architectures and micro-services architecture to build systems that can easily be deployed in multiple environments
• Make the engineer accountable for the code in production he created
• Use automated activities to simplify and improve software development
• Secure the pipeline by restricting the attack surface of the code base
• Security teams should be informed about the challenges faced by operators and developers, and vice versa
• Put the responsibility for handling the security considerations within the scope of the development team
• Perform automatic patching
• Let a Quality Assurance officer support and monitor the DevOps process from a high-level view

DoD Enterprise DevSecOps Reference Design:
• The "big bang" style delivery of the Waterfall process is replaced with small but more frequent deliveries, so that it is easier to change course as necessary. Each small delivery is accomplished through a fully automated or semi-automated process with minimal human intervention to accelerate continuous integration and delivery
• Build a culture of safety by sharing after-action reports on both positive and negative events across the entire organisation. Teams should use both success and failure as learning opportunities to improve the system design, harden the implementation, and enhance the incident response capability as part of the DevSecOps practice
• A software system can start with a Continuous Build pipeline, which only automates the build process after the developer commits code. Over time, it can then progress to Continuous Integration, Continuous Delivery, Continuous Deployment, Continuous Operation, and finally Continuous Monitoring, to achieve the full closed loop of DevSecOps. A program could start with a suitable process and then grow progressively from there. The process improvement is frequent, and it responds to feedback to improve both the application and the process itself
• Accept that change can be required at any time, and all options are available to achieve it. Fail fast, fail small, and fail forward. An example of failing forward is when a developer finds that a release does not work. Then instead of restoring the server to its pre-deployment state with the previous software, the developer's change should be discrete
71
Student: Maria Chtepen
Promotor: Prof. dr. Y. Bobbert
• Any privileged accounts (such as root and • Offer open access across the organisation to
the local administrator accounts) should be view the activities occurring within the au-
monitored very closely (or ideally be disa- tomated process and to view the auto-gen-
bled completely) erated Artifacts of Record
• Use timed access rights by allowing devel-
opers to request timed passwords
• Grant employees dynamic access rights per
sprint by assigning them specific responsi-
bilities and access rights every time
• It is often sufficient to give DevOps engi-
neers two accounts for the different envi-
ronments
• Maintain a separate CI server for develop-
ment done outside the company
• Provide fine -grained account management
with different levels of access
• All changes in production should preferably
be executed by CI/CD pipeline
• Organize release pipeline admins as a sepa-
rate team
Change management
• Integrate Change Management into the Re- • Make many small, incremental changes in-
lease Management process stead of fewer large changes. The scope of
• Changes should only be applied to produc- smaller changes is more limited and thus
tion using a process that forms part of a de- easier to manage
ployment pipeline • The DevSecOps lifecycle is an iterative
• Integrate automatic change checks into the closed loop. Start small and build it up pro-
deployment pipeline that halt the process if gressively to strive for continuous improve-
necessary
72
Student: Maria Chtepen
Promotor: Prof. dr. Y. Bobbert
• If CAB is in place, allow CAB leadership de- ment. Set up human intervention at the con-
termining which DevOps-related changes trol gates when necessary, depending on the
do not require the normal process rigor and maturity level of the process and the team’s
may bypass the traditional controls confidence level in the automation. Start
• Allow teams communicate themselves with with more human intervention and gradu-
other teams if they are implementing a ally decrease it as possible
change that might impact other systems • AO (Authorizing Official) should consider
• Foresee mechanism for automatic software automating the Authority to Operate (ATO)
roll back if any code changes leave the soft- process as much as possible
ware in a less than fully functioning state • The tags added to artifacts in the artifact re-
• Instead of releasing individual changes, we pository help guarantee that the same set of
should think more in terms of releasing a artifacts move together along a pipeline
product as a whole • Push down or delegate responsibility to the
lowest level:
• Strategic: This is related to the Change
Control Board (CCB) or Technical Review
Board (TRB); it involves “Big Change” un-
structured decisions. These infrequent and
high-risk decisions have the potential to
shape the strategy and mission of an organ-
isation.
• Operational: (Various Scrum) Cross-cut-
ting, semi-structured decisions. In these fre-
quent and high-risk decisions, a series of
small, interconnected decisions are made by
different groups as part of a collaborative,
end-to-end decision process.
• Tactical: (Global Enterprise Partners
(GEP)/Product Owner/Developers Activi-
ties) Delegated, structured decisions. These
frequent and low-risk decisions are effec-
tively handled by an individual or working
team, with limited input from others
• DoD Centralized Artifact Repository (DCAR)
holds the hardened VM images and hard-
ened OCI compliant container images of:
DevSecOps tools, container security tools,
and common program platform compo-
nents (e.g. COTS or open source products)
that DoD program software teams can uti-
lize as a baseline to facilitate the authoriza-
tion process.
Capacity management
73
Student: Maria Chtepen
Promotor: Prof. dr. Y. Bobbert
• Integrate process monitoring tools into the • Actionable security and quality assurance
deployment pipeline (QA) information, such as security alerts or
• Define monitoring metrics QA reports, must be automatically available
• An effective, dynamic inventory must to the teams at each software lifecycle phase
quickly and continuously discover and vali- to make collaborative actions possible
date new assets, or changes in existing as- • Governance activities do not stop after ATO
sets, as soon as they appear online but continue throughout the software lifecy-
• Have a log-driven, log-specific architecture cle, including operations and monitoring.
in the middle of the applications DevSecOps can facilitate and automate
• Use quality assurance monitoring to check many governance activities
if the code is following the defined standard
• Use tooling to process logs
Segregation in networks
Outsourced development
• Foresee service level agreements with • The program should have a formal Service
hosting providers that include security Level Agreement (SLA) with the underlying
• With off-shore development, the 3rd party infrastructure provider about what services
needs to be on the same platform, using the are included and what authorizations can be
same tooling inherited. This affects the status of applica-
• With cloud, we rely on the configuration of ble assessment procedures and prepares
the supplier, who provides the assurance the stage for inheritance into the operations
environment and application
74
Student: Maria Chtepen
Promotor: Prof. dr. Y. Bobbert
• Always integrate version control into • The instantiation of the DevSecOps environ-
DevOps processes. This can be done by us- ments can be orchestrated from configura-
ing version control systems such as Git or tion files instead of setting up one compo-
Subversion. nent at a time manually. The infrastructure
• Start each new build from an up-to-date im- configuration files, the DevSecOps tool con-
age housing the latest patched operating figuration scripts, and the application run-
system and middleware. time configuration scripts are referred to as
• Ensure that the state of production systems Infrastructure as Code (IaC)
can be reproduced (with the exception of • Both IaC and SaC are treated as software
production data) in an automated fashion and go through the rigorous software devel-
from information in version control. opment processes including design, devel-
• Considered an approach that kept track of opment, version control, peer review, static
which versions were deployed and tested analysis, and test
together, and then deploying those applica-
tions together as “snapshots” or “release
sets”.
• Deployable images should be available,
with each Operating System and the corre-
sponding configurations that can be de-
ployed automatically
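To make the IaC point above more concrete, the following is a minimal, purely illustrative Python sketch (not taken from the DoD report or from the research artifact): a DevSecOps environment is instantiated from a declarative configuration file, and only components whose base images come from an assumed hardened-image registry are accepted. The registry URL, component names and configuration layout are assumptions made for illustration only.

```python
# Hypothetical IaC-style environment instantiation: the environment is described
# in a configuration file and validated against an assumed hardened-image registry
# before anything is "provisioned". All names and the registry URL are illustrative.
import json

HARDENED_REGISTRY = "registry.example.internal/hardened/"   # assumed baseline source

ENVIRONMENT_CONFIG = json.loads("""
{
  "environment": "dev",
  "components": [
    {"name": "ci-server", "image": "registry.example.internal/hardened/jenkins:lts"},
    {"name": "scanner",   "image": "registry.example.internal/hardened/sonarqube:8"}
  ]
}
""")

def instantiate(config: dict) -> None:
    """Validate and 'provision' each component declared in the configuration."""
    for component in config["components"]:
        if not component["image"].startswith(HARDENED_REGISTRY):
            raise ValueError(f"{component['name']}: image is not from the hardened registry")
        # In a real pipeline this step would call the orchestration tooling;
        # here we only print what would be provisioned.
        print(f"provisioning {component['name']} from {component['image']}")

if __name__ == "__main__":
    instantiate(ENVIRONMENT_CONFIG)
```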
Conclusion
Research Findings
This research investigates a challenge that large regulated organisations are exposed to: how to increase the speed and quality of delivery, using DevOps, while remaining compliant with the applicable security standards and regulations? This question is particularly relevant since the speed and flexibility propagated by DevOps often contradict the core controls addressed by security standards: segregation of duties, change control, network segregation, etc. At the same time, certain DevOps objectives simplify the implementation of security controls through development process automation, continuous monitoring, earlier integration of security requirements into design, etc.
In the previous chapters we investigated whether DevOps is, in the end, an opportunity or a risk to security compliance. The answer to this question certainly depends on the compliance requirements applicable to a specific organisation, but in general, certain aspects of DevOps are an unmistakable benefit, while other aspects require a detailed review and fine-tuning in order to comply with security regulations.
We selected ISO 27002 and NIST SP 800-53 as the reference security standards for this research effort, since both standards are widely known and applied within European as well as US organisations. Furthermore, both are sufficiently detailed in terms of the controls covered to be able to relate them to DevOps controls. For each standard, a study of the controls was conducted to evaluate the impact of DevOps. At the same time, an extensive literature review of almost 100 scientific papers provided a good view on the relevant DevOps control objectives and the corresponding controls. This allowed us to perform the mapping between the impacted security controls and (Sec)DevOps control objectives. From this mapping we learned which (Sec)DevOps control objectives impact security compliance in either a positive (Opportunity) or a negative (Risk) way. Furthermore, the literature provided a good overview of (Sec)DevOps controls that can mitigate the abovementioned security risks. The results of the literature review were validated against the results of a number of interviews with subject matter experts, leading to the formalisation of the conclusions in the SecDevOps Capability Artifact. The capabilities highlighted in the artifact are shown in Figure 10. The figure indicates the relationship between the major capabilities and the phase of Gartner's SecDevOps toolchain to which they belong. Finally, the DoD Enterprise DevSecOps Reference Design recommendations were reviewed in the light of the SecDevOps Capability Artifact to verify that they effectively confirm our findings.
What is true for many transformations also applies to SecDevOps: it impacts security compliance from the People, Process and Technology perspectives:
• People. SecDevOps significantly changes the way in which security is integrated into the development process, which requires the introduction of new types of security roles within the organisation. The focus of the existing roles is also shifting and, therefore, there is a need for training at different levels: from generic security training for team members to dedicated specialist training for technical profiles (e.g. developers, architects, operations, etc.).
• Process. SecDevOps is often introduced in the organisation together with the move to Agile software development. Therefore, it is not always easy to distinguish the process impact of SecDevOps from the impact of Agile. In general, it affects the traditional segregation of roles, the rights each role gets within the software development and deployment chain, as well as the way in which software is designed and released.
• Technology. The most significant contribution of SecDevOps to the changing way we do security is due to the extensive use of automation. Automation and technology create almost endless possibilities to speed up and improve the quality of traditionally time-consuming processes such as documentation, change management and control, capacity management, event logging, monitoring and reporting. SecDevOps allows many of these steps to be automated through CI/CD, intelligent controls and alerts to be built into the deployment pipeline, and automatic actions to be taken in case of failure (e.g. automatic release rollback).
The major findings of our research suggest controls that allow SecDevOps to be incorporated into an organisation that traditionally builds its processes around compliance requirements. The controls suggest how to satisfy these requirements without sacrificing too much of the flexibility and speed that form the major advantage of SecDevOps in the first place. The most important research findings can be summarized as follows:
• Part of the tasks of the Security department should be moved to the SecDevOps teams. The latter must be able to take the responsibility for Security Gating, by using security testing automation, as well as for the iterative risk assessments and threat modelling. This allows the security activities to be positioned earlier in the development pipeline and guarantees an iterative review and adjustment. A Security Champion role within the team should be able to assist in these new activities, with the support of the Security department. Obviously, this shift in responsibilities requires the acquisition of the necessary technical skills by the Security people and of security knowledge by the SecDevOps team members.
• The segregation of duties becomes less "a people job" and more an automated process. In the end, the CI/CD pipeline should be able to perform all the necessary checks to assure the quality of the released code and to avoid any potential fraud. However, automation will often happen gradually, where in intermediary phases manual checks are still required (e.g. peer code review, Change Advisory Board (CAB)). These manual controls can be removed step-wise. For example, the CAB may decide which changes are allowed to be pushed automatically through the pipeline, based on the available automated controls, the severity of the change and the previous experience with similar releases. On the other hand, it is important to recognize that a different form of the segregation of duties will be put into place: if the pipeline is becoming the crucial or only form of control, developers should be prohibited from tampering with the pipeline code and configuration.
• SecDevOps requires new standards for software design: instead of releasing changes, products will be released. This leads to a new form of workload management, where features are grouped around the release of a product. It also means that the dependencies between different products should be minimized. Service-oriented architectures and micro-services are very suitable for decoupling large systems into smaller independent units that can be deployed separately in multiple environments.
• Finally, automation is the core of many recommendations, meaning that the full gain of SecDevOps can only be achieved when the majority of tasks can be done within the appropriate tooling, linked into an automatic CI/CD pipeline. The automation varies from access control (e.g. RepoMan), to automatic documentation (e.g. JIRA, GIT), to compliance checks (e.g. Sonar, Fortify), monitoring and alerting (e.g. Splunk). A minimal illustrative sketch of such an automated pipeline gate follows this list.
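As a purely illustrative sketch (not prescribed by the interviewed organisations, the referenced standards or the DoD report), the Python fragment below shows how a "Compliance as Code" gate in a CI/CD pipeline might aggregate automated control results and decide whether a change may be released automatically or must be routed to a CAB. The control names, the severity levels and the route_change function are hypothetical.

```python
# Hypothetical "Compliance as Code" gate: aggregate automated control results
# and decide whether a change may bypass the CAB. Names and thresholds are
# illustrative only, not taken from the research artifact or the DoD report.
from dataclasses import dataclass

@dataclass
class ControlResult:
    name: str        # e.g. "static_analysis", "dependency_scan"
    passed: bool
    blocking: bool   # a failed blocking control always stops the release

def route_change(severity: str, controls: list) -> str:
    """Return 'auto-release', 'cab-review' or 'rejected'."""
    if any(c.blocking and not c.passed for c in controls):
        return "rejected"                      # halt the pipeline
    if severity == "high" or any(not c.passed for c in controls):
        return "cab-review"                    # manual control still required
    return "auto-release"                      # all automated controls passed

if __name__ == "__main__":
    results = [
        ControlResult("static_analysis", passed=True, blocking=True),
        ControlResult("dependency_scan", passed=True, blocking=True),
        ControlResult("test_coverage>=80%", passed=True, blocking=False),
    ]
    print(route_change("low", results))        # -> auto-release
```

The sketch mirrors the step-wise removal of manual controls described above: as long as any control fails or the change severity is high, the manual review remains in place.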
Research Limitations
The research covered in this book has a number of limitations. First of all, due to time and geographical constraints, the number of participants in the expert interviews was limited to fewer than a dozen companies in Belgium and The Netherlands. The background of the experts varied from Security Architects to Delivery Managers, which gave us the advantage of having varying points of view on the addressed problem. On the other hand, the responses to some questions were occasionally contradictory; these were filtered out in the Artifact by relying on the opinion of the majority of the experts, as well as on the literature references.
Furthermore, as suggested by the Design Science Research approach (see Research Methodology), the artifact should be developed, and its effectiveness demonstrated and evaluated in practice. These steps were not completed due to the lack of time, and the research was limited to the requirements collection phase.
Future Research
SecDevOps is still a relatively new paradigm and many commercial organisations are in the process of discovering the best way to adapt it to their requirements and limitations. Therefore, a periodic review of the status and the evolution in the domain of SecDevOps is required. The following elements may well become interesting subjects for future investigation and research:
• Extend the SecDevOps Capability Artifact with a Capability Maturity Model. The artifact as specified in this study defines a number of controls that are crucial for the integration of the SecDevOps capability into a regulated organisation. However, there is no indication of which controls are required for which level of SecDevOps capability maturity. Presumably, not all organisations will require the same controls due to differences in background and in the complexity of the organisational structure.
Figure 11 shows the Capability Level model described in the ISO 15504 standard, which can serve security maturity measurement well. This model can be used in future research to determine which SecDevOps controls belong at which level of organisational capability maturity. The classification of controls can, for instance, be validated through additional expert interviews.
• Expansion of the scope of the best practices for the implementation of SecDevOps outside the boundaries of large regulated organisations. It may provide interesting input and insight into how the SecDevOps implementation varies across different companies, depending on industry, sector (e.g. non-profit, public, commercial) and the size of the organisation.
• Investigation of practical aspects of the implementation of the SecDevOps Capability Artifact, converting the high-level recommendations provided in this study into actual operational models. This study is limited to high-level control descriptions and recommendations. However, there are plenty of opportunities to refine these suggestions from the point of view of practical implementation. For instance, for each of the covered aspects a model or framework can be proposed describing how exactly it can be integrated into the organisational processes.
• Implementation and validation of the SecDevOps Capability Artifact within one or several reference organisations. These findings would allow the suggestions summarised in this study to be fine-tuned and improved with practical examples.
References
Aljundi, M. (2018). Tools and Practices to Enhance DevOps Core Values (Master's Thesis). Lappeenranta: Lappeenranta University of Technology, School of Business and Management.
Bass, L. H. (2015). Securing a Deployment Pipeline. (pp. 4-7). Florence, Italy: Proceedings of the Third
International Workshop on Release Engineering.
Betz, C. O. (2016). The Impact of Digital Transformation, Agile, and DevOps on Future IT Curricula. Bos-
ton, MA, USA: SIGITE’16.
CA Technologies. (2014). Devops: The Worst-Kept Secret to Winning in the Application Economy.
Callanan, M. S. (2016). DevOps: Making It Easy to Do the Right Thing. IEEE Software, vol. 33, no. 3, 53-59.
CFO Forum. (n.d.). Understanding and managing the IT risk landscape: A practitioner’s guide.
https://www.thecroforum.org/2018/12/20/understanding-and-managing-the-it-risk-land-
scape-a-practitioners-guide/.
Chivers, H. P. (2005). Agile Security Using an Incremental Security Architecture. Extreme Programming
and Agile Processes in Software Engineering, Volume 3556.
Clager, J. R. (2016). 2016 IEEE 40th Annual Computer Software and Applications Conference (COMP-
SAC). Mitigating an Oxymoron: Compliance in a DevOps Environments, (pp. 396-398). Atlanta, GA.
Colavita, F. (2016). DevOps Movement of Enterprise Agile Breakdown Silos, Create Collaboration, In-
crease Quality, and Application Speed. Proceedings of 4th International Conference in Software
Engineering for Defence Applications Advances in Intelligent Systems and Computing.
Compliance Forge Website. (n.d.). Which framework is right for my business? NIST Cybersecurity Frame-
work vs. ISO 27002 vs. NIST 800-53 vs. Secure Controls Framework. Retrieved from
https://www.complianceforge.com/faq/nist-800-53-vs-iso-27002-vs-nist-csf.html
Derksen, B. N. (October 2018). Agile Secure Software Lifecycle Management Secure by Agile Design. Lei-
den: Secure Software Alliance.
Farroha, B. F. (2014). A Framework for Managing Mission Needs, Compliance, and Trust in the DevOps En-
vironment (pp. 288–293). Baltimore, MD, USA: Proceedings of the 2014 IEEE Military Communi-
cations Conference.
Forsgren, N. H. (2018). Accelerate: The Science of Lean Software and Devops: Building and Scaling High
Performing Technology Organisations. OR, United States: IT Revolution Press.
Gill, A. L. (2017). DevOps for Information Management Systems. VINE Journal of Information and
Knowledge Management Systems.
Harkins, M. (2013). A New Security Architecture to Improve Business Agility. In Managing Risk and In-
formation Security.
Hevner, A. (2007). A Three Cycle View of Design Science Research. Scandinavian Journal of Information Systems, Vol. 19, Iss. 2, Art. 4.
Hevner, A. M. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.
Hilton, M. N. (2017). Trade-offs in continuous integration: assurance, security, and flexibility. ESEC/FSE.
Laukkarinen, T. K. (2018). Regulated software meets DevOps. Information and Software Technology,
vol. 97.
Laukkarinen, T. K. (May 2017). DevOps in regulated software development: Case medical devices. (pp.
15–18). Proceedings of 2017 IEEE/ACM 39th International Con-ference on Software Engineer-
ing: New Ideas and Emerging Technologies Results Track (ICSE-NIER), IEEE.
Lin, T. (2016). Compliance, Technology, and Modern Finance. Temple University Legal Studies Research
Paper No. 2017-06.
Mattetti, M. S.-P. (2015). Securing the infrastructure and the workloads of Linux containers. (pp. 559–
567). Florence, Italy: Proceedings of the 2015 IEEE Conference on Communications and Net-
work Security (CNS).
McCarthy, M. A. (2014). A Compliance Aware Software Defined Infrastructure. (pp. 560-567). Anchor-
age, AK: Proceedings of IEEE International Conference on Services Computing.
McCarthy, M. H. (2015). Composable DevOps: Automated Ontology Based DevOps Maturity Analysis (pp.
600-607). New York, NY: IEEE International Conference on Services Computing.
Michener, J. C. (2016). 2016 IEEE 40th Annual Computer Software and Applications Conference (COMP-
SAC). Mitigating an oxymoron: compliance in a DevOps environment, (pp. 396-398). Atlanta, GA.
Mohan, V. O. (2016). SecDevOps: Is It a Marketing Buzzword? Salzburg, Austria: Proceedings of the 11th International Conference on Availability, Reliability and Security (ARES).
Mohan, V. O. (2018). BP: Security Concerns and Best Practices for Automation of Software Deployment
Processes: An Industrial Case Study (pp. 21-28). Cambridge, MA: Proceedings of 2018 IEEE Cy-
bersecurity Development (SecDev).
Murray, A. (2015). The Complete Software Project Manager: Mastering Technology from Planning to
Launch and Beyond – Agile, Waterfall, and the Key to Modern Project Management.
Myrbakken H., C.-P. R. (2017). Software Process Improvement and Capability Determination. SPICE
2017. DevSecOps: A Multivocal Literature Review (pp. 17-29). Cham : Springer, Communications
in Computer and Information Science, vol 770.
Pastrana, M. &. (2019). Ensuring Compliance with Sprint Requirements in SCRUM: Preventive Quality
Assurance in SCRUM. In Advances in Computer Communication and Computational Sciences (pp.
33-45). Springer.
Plant, O. (2019). DevOps Under Control: Development of a Framework for Achieving Internal Control and
Effectively Managing Risks in a DevOps Environment. University of Twente: Master Thesis for the
study programme MSc. Business Information Technology.
Rahman, A. W. (2016). In Proceedings of the Symposium and Bootcamp on the Science of Security (Hot-
Sos ’16). Security practices in DevOps (pp. 109–111). New York, NY, USA: Association for Compu-
ting Machinery.
Rindell, K. H. (2016). Case Study of Security Development in an Agile Environment: Building Identity
Management for a Government Agency. (pp. 556-563). Salzburg: 11th International Conference
on Availability, Reliability and Security (ARES).
Savor, T. D. (2016). Continuous Deployment at Facebook and OANDA. (pp. 21-30). Austin, TX: Proceed-
ings of 2016 IEEE/ACM 38th International Conference on Software Engineering Companion
(ICSE-C).
Schneider, C. (2015). Security DevOps - staying secure in agile projects. Amsterdam, Netherlands: Pro-
ceedings of OWASP AppSec Europe.
Shahin, M. B. (2016). The Intersection of Continuous Deployment and Architecting Process: Practitioners’
Perspectives. At Ciudad Real, Spain: 10th ACM/IEEE International Symposium on Empirical Soft-
ware Engineering and Measurement (ESEM).
Shahin, M. B. (2017). Continuous Integration, Delivery and Deployment: A Systematic Review on Ap-
proaches, Tools, Challenges and Practices. IEEE Access, vol. 5, 3909-3943.
Sharma, S. C. (2015). DevOps for Dummies. USA: Second IBM limited edition, John Wiley & Sons.
Stolt, S. N. (2013). Continuous Delivery? Easy! Just Change Everything (Well, Maybe It Is Not That Easy)
(pp. 121-128). Nashville, TN: 2013 Agile Conference.
Storms, A. (2015). How security can be the next force multiplier in devops. San Francisco, USA: Proceed-
ings of RSAConference.
Vael, M. (2019, February 22). Enterprise Information Security Architecture : How to design rock-solid
security directly into information systems. Antwerp, Belgium.
Yasar, H. (2017). Implementing secure DevOps assessment for highly regulated environments. (pp. 1–
3). Reggio Calabria, Italy: Proceedings of the 12th International Conference on Availability, Reli-
ability and Security - ARES ’17.
Interview 1
How are Information Security roles and responsibilities impacted by the introduction of SecDevOps? Is there impact at all?
There is absolutely an impact of SecDevOps on the traditional security roles and responsibilities... Defining the role of the Security Champion within the agile organisation is becoming extremely important: this role combines the responsibility of ensuring the appropriate Security Gating with support for testing and security automation within the agile team. The Security Champion himself is supported by a Cyber Defense team for the execution of his tasks. This creates scalability within the Cyber Defense teams, since it is not feasible to foresee a dedicated Cyber Defense professional within each team/squad. But at the same time, the question of training the right skills of the Security Champion becomes prominent. In fact, instead of the traditional "3 amigos" Agile principle, where the work is examined from 3 different perspectives (Business, Development & Testing), we need to start speaking in terms of "5 amigos", adding Security and Operations to the list.
How to address the segregation of duties in SecDevOps environment where the boundaries
between different roles are faded in favour of flexibility & speed?
If there are skills to execute interchangeable tasks, such as a developer with the knowledge of test-
ing techniques, tasks can be performed by the same person. But often the limitation is the absence of the
required skills combined by the same profile, and in that case separate rolls are necessary. A substitute
to this approach can be the automation. For instance, a developer may launch an automatic test suite
execution.
Another point regarding the segregation of duties is the restriction of full access. For example, in critical organisations, a developer is not allowed to have full access to the deployment pipeline, from the development to the production environment, because of regulatory requirements. In these organisations, access allocation based on the "need to know" principle remains valid. Access restrictions are of less importance in non-critical industries (e.g. Netflix), where a full-access setup may be perfectly acceptable.
Generic IAM role-based access rights should be defined, in line with the responsibilities within the deployment pipeline. Developers may have read access rights to production, but any intervention required for a release in production should either be done by the Operations personnel or deployed fully automatically. Here the impact of the company culture is of major importance: there should be no sanctions for making mistakes, but issues should be openly discussed to stimulate the change process and to steepen the learning curve.
SecDevOps may facilitate data sharing and knowledge exchange between the teams if sharing is
happening in a safe way: all data is stored in a shared secure platform, such as O365.
Patching automation may also be a major advantage, but it should be managed through rings of secure access: applicative patching rings should be foreseen, where the right to patch depends on the application criticality. For example, nCircle is a tool that automatically checks the compliance and patch level against predefined benchmarks, such as the CIS security benchmarks (system, configuration and database benchmarks). At the same time, it is important to design for failure and to make use of the available automatic tooling, such as monkey testing and fuzz testing, to verify reliability and resilience.
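As a purely illustrative sketch (not drawn from nCircle or from the actual CIS benchmark content), the following Python fragment shows the kind of automated patch-level check described above: installed package versions are compared against a minimal, hypothetical baseline, with stricter enforcement for more critical application rings.

```python
# Hypothetical patch-level check inspired by the benchmark-driven approach
# described above. The baseline data and ring policy are illustrative only.
MIN_VERSIONS = {"openssl": (3, 0, 14), "nginx": (1, 24, 0)}  # assumed baseline

def parse(version: str) -> tuple:
    return tuple(int(p) for p in version.split("."))

def check_patch_level(installed: dict, ring: str) -> list:
    """Return findings; in the 'critical' ring any finding blocks deployment."""
    findings = [
        f"{pkg} {ver} is below baseline {'.'.join(map(str, MIN_VERSIONS[pkg]))}"
        for pkg, ver in installed.items()
        if pkg in MIN_VERSIONS and parse(ver) < MIN_VERSIONS[pkg]
    ]
    if findings and ring == "critical":
        raise RuntimeError("Blocking: " + "; ".join(findings))
    return findings  # non-critical rings only report the findings

if __name__ == "__main__":
    print(check_patch_level({"openssl": "3.0.13", "nginx": "1.25.2"}, ring="standard"))
```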
It is important to understand that the SecDevOps paradigm does not equal a "garage attitude" and that it requires the adoption of standardization and process simplification. For instance, the creation of generic architecture services, using ArchiMate, facilitates standardization and allows the security review to be externalized (to a third party). An overview of controls largely takes away the need to do individual control reviews (e.g. when buying external cloud services). It provides a complete view on the level of compliance, demonstrating the overall level of security. From the security control overview, we can move towards security assurance through validation and certification (potentially by accredited third parties). Architecture should be applied continuously and consistently, leading to continuous security assurance. Here an important role is reserved for the Security Architect, who supports the operational Security Teams, which in turn support the Security Champions in the different teams. The Architect is responsible for translating high-level risk models and security requirements into more targetable controls, providing the third line of support (as in the ITIL 3-level support model).
Which information security awareness, education and training is required to support the
new way of working?
It is important to introduce generic Joiners/Leavers training for people joining or leaving the company. Specific, more detailed, training is required for the Security Champion, including a "Survival Kit". Technical training is required for developers, to make sure they are aware of the best practices for secure software development.
How is the privileged access to be managed in order to make the optimal use of the flexibility
of SecDevOps?
A balance between flexibility and security is required. Tooling may help to simplify the task: RepoMan is a tool that uses activity history to reduce account privileges over a certain period of time.
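Purely as an illustration of the mechanism mentioned above (the actual RepoMan tooling is not described in this study), the Python sketch below revokes permissions that have not been used for a given number of days, based on an activity history; the data structures and the 90-day threshold are hypothetical.

```python
# Hypothetical activity-based privilege reduction, inspired by the idea of
# shrinking account rights that are no longer used. Data model is illustrative.
from datetime import date, timedelta

def reduce_privileges(permissions: dict, today: date, max_idle_days: int = 90) -> dict:
    """permissions maps permission name -> date of last use; idle ones are dropped."""
    cutoff = today - timedelta(days=max_idle_days)
    kept = {perm: last for perm, last in permissions.items() if last >= cutoff}
    for perm in sorted(set(permissions) - set(kept)):
        print(f"revoking idle permission: {perm}")   # in practice: call the IAM system
    return kept

if __name__ == "__main__":
    history = {"deploy:test": date(2019, 5, 1), "deploy:prod": date(2018, 11, 3)}
    print(reduce_privileges(history, today=date(2019, 6, 1)))
```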
How to control access to the source code in the SecDevOps environment which promotes
sharing?
The pipeline provides access only to specific source code through a specific tool. Machines and people should have the same access rights, limited in the same way.
Direct changes to components during the development process are difficult to achieve in a regulated environment. There is often no complete view on the dependencies, so there is no guarantee that there is no security impact on the underlying components. However, there is no longer a need for a Change Advisory Board; instead, the Change Management process should be incorporated into the Release Management process. Change Management becomes subordinate to Release Management.
Instead of releasing individual changes, we should think more in terms of releasing a product as a whole. Through the pipeline the following product types can be released: compiled programs, packages, containers or Virtual Machines.
There are different mechanisms to include Change Review in the Release process. However, Code Review is definitely NOT one of them, since it is too slow and more suitable for the Waterfall methodology.
In SecDevOps, the provisioning of new servers is facilitated by cloud capacity management. You need to have a dynamic infrastructure environment; in a static environment the concept of SecDevOps is difficult to implement.
We can speak about more integrated logging & monitoring thanks to the CI/CD pipeline. APIs and event triggers allow us to use these possibilities in a more consistent way. Logging & monitoring should be technology neutral.
Should the installation of software be restricted or not, in order to boost the performance?
In a contained environment the tooling choice can be delegated to the development team to improve software delivery performance. However, in a wide network, the choice should be limited to protect the ecosystem. Furthermore, free choice is not suitable for regulated environments.
What is the best way to segregate the network while preserving the flexibility?
The Development environment should be separated from Production. This allows random products to be downloaded and used in the Test environment.
No real impact, maybe only facilitation through automation. Automation may be an enabler for outsourcing.
Further test automation is a major advantage of SecDevOps. The results of automated tests should be used during the gating process.
We need a different approach for configuration management, since SecDevOps offers new possibilities. For instance, deployable images should be available, with which the Operating System and the corresponding configurations can be deployed automatically. Easily modifiable configuration templates should be available for reuse. The baseline can be defined and modified per application type.
Interview 2
How are Information Security roles and responsibilities impacted by the introduction of
SecDevOps? Is there impact at all?
Security should be positioned at the front of the pipeline. Often Security does not have enough bandwidth, is reactive and comes in too late. Therefore, a merge between Security and Enterprise Architecture is a must. Security should steer the design and implementation process using SABSA principles: test planning, certificate management, the judge & advocate principle, segregation of duties, etc. The role of Security as a police agent should be replaced by the effective design of secure architectures.
How to address the segregation of duties in a SecDevOps environment where the boundaries between different roles are blurred in favour of flexibility & speed?
We are dealing with Logical and Implementation/Physical models, which are not the same. If we have a change without security and functional impact, the change may proceed without a formal change process. The only problem with this approach is that developers always like to decide on the impact of a change themselves. However, they are not always capable of making a correct judgement due to the complexity or a lack of knowledge. Therefore, a Gatekeeper role is necessary. This role understands the business logic and the data impact, and is able to perform a security assessment. Changes that are stopped by the Gatekeeper return to the backlog, which is reviewed on a daily basis. The obvious disadvantage of this approach is an extra delay of changes.
Tooling allows the development to remain current and up-to-date all the time (e.g. due to automatic patching and configuration management). Developers should use standard tools and follow rules with predefined controls, defined by the Architecture. The Architecture & Security policies should be imposed at 3 layers: the infrastructure (Infrastructure as a Service), the application (including data), and the access & presentation layer where data is produced.
Externally developed code should be pre-validated by third parties. There should be much more proactive security automation built into the CI/CD pipeline, especially in Dev and Test.
When cloud development is used, it is very important to build software for a multi-cloud context and to use solely cloud-agnostic services to avoid cloud provider lock-in.
Which information security awareness, education and training is required to support the
new way of working?
Introduce Welcome and Goodbye procedures for employees. Specific training for developers is also required; it should allow the developer's knowledge to be scored. Today there is a shortage of developer security knowledge.
It is also important to foresee a general data classification training: which data structures and objects are sensitive or not. A lot of people do not know the data regulations and their consequences.
Generally, a Security Onboarding training with basic security information should be foreseen. The training should be given to Development & Operations, since the latter have more need of and more gain from this information. This training can be provided by Architecture & Compliance and needs to be repeated on a regular basis.
How is the privileged access to be managed in order to make the optimal use of the flexibility
of SecDevOps?
Privileged Access Management (PAM) is a domain in which it is complicated to provide an overall solution. The following techniques are often applied: segregation of duties; role-based access control to applications (provide least-privileged access); and using applications such as FIN and CyberArk to manage access to applications centrally, although not every application supports this functionality and it can be a single point of failure.
The pipeline has access to the source code, since otherwise you get issues with troubleshooting.
How to control access to the source code in the SecDevOps environment which promotes
sharing?
Developers do not have access to the source code in production. A supervisory role of a Team Lead is foreseen instead. The Team Lead can see all production data, but cannot modify any code.
A semi-automated pipeline should be in place, because not all changes can go live without an approval. In this pipeline, 65% of changes should be processed fully automatically, while the rest should require a "check the box" approval. When introduced for the first time, a change should fall into the second category, but as it becomes known, controlled and fine-tuned, it can move to the fully automated segment (gradual learning).
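Purely as an illustrative sketch of the gradual-learning idea described above (the threshold and data structures are hypothetical, not taken from the interviewed organisation), the Python fragment below promotes a change type from "check the box" approval to fully automated release once it has accumulated enough successful supervised releases.

```python
# Hypothetical gradual-learning router: a change type starts with a manual
# "check the box" approval and is promoted to fully automated release after
# enough successful supervised releases. The threshold is illustrative.
from collections import defaultdict

PROMOTION_THRESHOLD = 5          # successful supervised releases before promotion
success_history = defaultdict(int)

def route(change_type: str) -> str:
    if success_history[change_type] >= PROMOTION_THRESHOLD:
        return "fully-automated"
    return "check-the-box"       # requires a lightweight manual approval

def record_success(change_type: str) -> None:
    success_history[change_type] += 1

if __name__ == "__main__":
    for _ in range(5):
        record_success("config-update")
    print(route("config-update"))   # -> fully-automated
    print(route("db-migration"))    # -> check-the-box
```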
On the other hand, if a developer is authorized to release certain changes to production, he may proceed without specific checks. However, there should be a strict process to obtain these rights.
The 4-eyes principle is often applied to do code reviews before release: is the code documented, is the test report available & confirmed, … All such changes are bundled and released once a week. This process is not applicable to incidents.
If there is an issue in production, the analysis is performed by Operations, not by Development. Development cannot perform any modifications in production (segregation of duties).
Change Management is still the heaviest operational process, which is not OK for SecDevOps. We should strive towards a leaner process for trivial changes, but there are still too few changes falling into this category. Most of the changes require a full-blown approval process. Today we have a weekly release process, but very few changes pass automatically through the release. There is a lot of fear of non-compliance, customer dissatisfaction, personal responsibility and job protection among operational teams.
Independent of the available tooling, we still largely rely on the good will of the developer/tester/… to write decent documentation. Architecture defines the code principles and the tools to address the documentation, but this approach has not proved to be successful in the past, largely due to the variety of tools.
We designed a wrapper tool (an interpretation script) that requires as input a code library name and a programming language. The tool knows the format of the comments in the code for each programming language and automatically checks whether the code contains a sufficient amount of documentation. Before every release to production the result of the wrapper is checked.
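Purely as an illustrative sketch of the wrapper described above (the comment formats, the 10% threshold and the file layout are assumptions, not the organisation's actual script), the Python fragment below computes a simple comment-to-code ratio per language and flags libraries that fall below a documentation threshold.

```python
# Hypothetical documentation-coverage check inspired by the wrapper described
# above: count comment lines per language and compare against a threshold.
from pathlib import Path

COMMENT_PREFIX = {"python": "#", "java": "//", "go": "//"}   # assumed mapping
EXTENSIONS = {"python": "*.py", "java": "*.java", "go": "*.go"}

def documentation_ratio(library_path: str, language: str) -> float:
    prefix, pattern = COMMENT_PREFIX[language], EXTENSIONS[language]
    comment = code = 0
    for source in Path(library_path).rglob(pattern):
        for line in source.read_text(errors="ignore").splitlines():
            stripped = line.strip()
            if not stripped:
                continue
            comment += stripped.startswith(prefix)
            code += 1
    return comment / code if code else 0.0

def release_gate(library_path: str, language: str, threshold: float = 0.10) -> bool:
    """Return True if the library is documented well enough to be released."""
    return documentation_ratio(library_path, language) >= threshold

if __name__ == "__main__":
    print(release_gate(".", "python"))
```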
However, it is important to mention that not every tiny change can be fully documented. Sometimes changes are performed weekly on a particular component. On the other hand, all important objects that are frequently manipulated should be well documented in an Architecture Toolbox. If, for instance, a new attribute is added to an object, detailed documentation will be foreseen (who requested the change, reason, approver, etc.). Team leads will verify the availability of the documentation before the release into production.
Capacity management is a triad: compute (memory), network & storage. Network is very important: it deals with bandwidth consumption. We defined acceptable boundaries for each of these parameters. If the usage falls within the boundaries, the request is automatically processed. With the initial request submission, where the capacity is not known in detail, the total of the request is evaluated against the expected consumption. It requires pre-investment, because the capacity should be foreseen on premises. It is different for the cloud, where the volumes are agreed contractually upfront and only the total envelope of the resources may not be exceeded. To avoid waste, we monitor the effective usage, and we can downsize to avoid overprovisioning.
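As a purely illustrative sketch of the boundary-based auto-approval described above (the boundary values and request format are hypothetical), the Python fragment below auto-approves capacity requests that stay within predefined limits for compute, network and storage, and routes the rest to manual review.

```python
# Hypothetical capacity-request router: requests within predefined per-parameter
# boundaries are approved automatically, the rest go to manual review.
BOUNDARIES = {"compute_gb": 64, "network_mbps": 500, "storage_gb": 1024}  # assumed limits

def route_capacity_request(request: dict) -> str:
    exceeded = [
        f"{param}={value} exceeds limit {BOUNDARIES[param]}"
        for param, value in request.items()
        if param in BOUNDARIES and value > BOUNDARIES[param]
    ]
    if exceeded:
        return "manual-review: " + "; ".join(exceeded)
    return "auto-approved"

if __name__ == "__main__":
    print(route_capacity_request({"compute_gb": 32, "network_mbps": 100, "storage_gb": 200}))
    print(route_capacity_request({"compute_gb": 128, "network_mbps": 100, "storage_gb": 200}))
```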
Environments with multiple time zones need to be treated carefully, since the capacity changes a
lot over time.
If there is no automation, you cannot do auto-provisioning. Often a whole team is assigned to capacity management, consuming a lot of headcount while working with incorrect data. Automatic provisioning avoids this headcount issue.
A lot of point solutions were previously in place, which required some optimization. The Splunk engine helps to do data selection and aggregation on a continuous basis, with access to reporting. Successful correlation was implemented between the tools: service availability metrics, KPIs. Now factual and actual information can be collected, instead of assumptions.
There are a lot of technology architecture events, but application events can also be monitored (lots of export possibilities) in a consolidated environment. Applications such as Workday address this need: they have analytical capabilities out of the box, included in the subscription cost. Event processing should be treated carefully, since it can change rapidly within the application itself.
MuleSoft and SAP BI/BO can monitor integration and end-point connections. Mainly the technology layer is currently consolidated and monitored in a coherent way: e.g. access attempts to T1 applications.
Should the installation of software be restricted or not, in order to boost the performance?
Simplification is cost efficient, allows easy management and is an enabler for automation. There are standard public hosting scenarios defined for the cloud, but also for on-premises installations. Interoperability matrices exist that ensure that everything works together, with preferred or recommended versions.
For CI/CD there are many more different tools, so you cannot work in the previously mentioned way. Here we differentiate between preferred and recommended tools. There is also a collection of mini-tools defined, which are included in the category "Tolerated". Each time a new tool appears, it is reviewed for its capability to be integrated into the environment. If there is a better alternative, the new one will be rejected to keep the environment containable. However, there is always tooling coming into the system through other purchased products, which means that the list of allowed tools keeps growing.
For intermediary security tools we check whether they are supported. Exclusive or exotic tools are reviewed from the security perspective: can we support them?
What is the best way to segregate the network while preserving the flexibility?
In a company multi-tenancy context, network segregation is a difficult discussion, since we are dealing with different independent entities to be managed together. The technology is not ready to fully support the segregation in the virtual context.
If you look at software-defined data centers, they are stretched into the cloud. The security context in a multi-tenant environment (cloud or on-premises) is very complicated: you have sensitive segments (where you install backup & recovery); segments dedicated to outsourcing & service providers; segments for the collaboration & employee context; elevated segments (administrators), … There are different tools that support enforcement, but there are still a lot of gaps. It is work in progress: flow enforcement, micro-segmentation, etc. Architecture plays a crucial role here, since it is not possible to manage your segmentation appropriately without the architecture.
The cloud does not make the segregation easier, because your application is connected to other applications (SaaS, PaaS). In the cloud you set up EPC for one tenant, but it is not enough. To protect sensitive applications, you still need micro-segmentation.
It is important to build compliance into software development. For example, if tools are not patched or updated automatically, they become vulnerable to a man-in-the-middle attack. Therefore, only recommended features/tools can be used in the CI/CD pipelines. Exceptions to these recommended tools should be indicated and built into the security context.
Interview 3
How are Information Security roles and responsibilities impacted by the introduction of
SecDevOps? Is there impact at all?
SecDevOps has a very significant impact on security roles and responsibilities. Before, there was no real interaction between Sec, Dev & Ops due to the segregation of duties, but today, to make teams actually effective, we need SecDevOps, whereby teams work together.
It is important to arrange security within the multiple teams, since, for example, Product Owners are focussing today on business functionalities, but not on the (ab)use cases. If someone creates a use case, a security person in the team immediately sees an abuse case.
How to address the segregation of duties in a SecDevOps environment where the boundaries between different roles are blurred in favour of flexibility & speed?
There is absolutely an issue for compliance. In the past, in the banking sector the largest fraud was conducted by programmers who were able to push their code up to production or could manipulate the transactions (send a small amount of every transaction to their own account). There are different ways to have a compliance breach/fraud: a programmer having access to the development, test and production environments simultaneously; a developer having access to bank data of other customers; etc.
The integration of SecDevOps is only possible if a number of conditions are satisfied: the 4-eyes principle should be applied, where one team member is able to control the other team member; there should be traceability through JIRA or other tooling; access to the run environment should go "through break glass" (for issue handling), whereby a security alarm is triggered when access takes place; extensive logging & monitoring; and a full CI/CD pipeline, including testing. These activities should be traceable through logging for auditing purposes, to prove that there are sufficient controls to mitigate the risk.
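As a purely illustrative sketch of the break-glass mechanism mentioned above (the duration, alert channel and data structures are hypothetical), the Python fragment below grants time-limited emergency access while writing an audit record and raising a security alert.

```python
# Hypothetical break-glass access: grant time-limited emergency access while
# logging an audit record and triggering a security alert. Values are illustrative.
import logging
from datetime import datetime, timedelta

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")
security_alerts = logging.getLogger("security-alerts")

def break_glass(user: str, environment: str, reason: str, minutes: int = 60) -> dict:
    """Return a temporary access grant; every grant is audited and alerted on."""
    expires = datetime.utcnow() + timedelta(minutes=minutes)
    grant = {"user": user, "environment": environment, "expires": expires.isoformat()}
    audit_log.info("break-glass access granted: %s (reason: %s)", grant, reason)
    security_alerts.warning("ALERT: break-glass access by %s to %s", user, environment)
    return grant

if __name__ == "__main__":
    break_glass("jdoe", "production", "hypothetical incident INC-1234")
```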
There is a lot of change due to automation, but it is important to be cautious. For example, automatic patching can lead to fraud-detection false positives because of the extreme distrust in these environments (you blacklist everything until it is explicitly whitelisted).
As the owner of the secure information / process you can trust the certification & accreditation of third parties, but you should be able to check the whole flow end-to-end. The severity of the check depends on the risk: for a credit card company, for example, the risk is much higher than for a bakery.
Which information security awareness, education and training is required to support the
new way of working?
You always have security awareness training within every organisation and you need to keep doing it. However, the attitude and behaviour of people is the most important factor. For instance, an employee gets a yearly awareness training and every year 10 to 20% of employees fail. It has everything to do with the attitude and the environment (which partially creates the attitude). Awareness has a very limited duration.
Security people often do not understand development specifics. Vice versa, developers expect to receive a clear TO-DO list, while merely translating OWASP will yield 260 items, ISO 110 items, PCI 400 items, etc. So you need to create a real team with different profiles. We use the Agile Skills Framework, which advises creating a COP (Community of Practice). In each Agile team, one member is a participant in this community of practice. For example, security participants provide the COP with information on vulnerabilities and security requirements. Non-security participants need to translate this input into the domain in which they are active (e.g. web portal, mobile, etc.). Different insights and expertise are needed in the COP, since in the database layer you have different vulnerabilities & controls than in the infrastructure layer. The COP is necessary because Security departments are generally very small and it is not possible to foresee a dedicated security expert for every Agile team. The COP meets every couple of weeks.
How to control access to the source code in the SecDevOps environment which promotes
sharing?
It is a very important point, since the source code can easily be misused to incorporate a malicious piece of code. Access to the source code should be carefully managed via Identity & Access Management (IAM).
Every Agile team is in constant change mode, but the question arises: what is the actual definition of Done? Instead of the original old-fashioned Change Board, we have a Design Authority, where the Agile teams need to collect a stamp from Architecture, Infrastructure and Security.
Not every change needs to go through the Design Authority. There is a 4-eyes principle, whereby every change in the backlog should be described in JIRA, and control teams, such as Security, should be able to review the backlog and take a decision on further actions.
Security requires documentation, while Agile teams don't wish to document. I turned the situation around by specifying which logs are relevant for Security. To provide this logging you need a certain process and a definition of Done. This definition of Done covers the Security documentation requirements. It is of no importance whether the process is documented in Visio or JIRA, as long as it is traceable. Documentation through logging has a huge advantage compared to traditional documentation: it shows the actual, instead of an outdated, state.
Not really aware of this aspect. It is related to the availability of systems. Systems, processes and data are plotted against a CIAT rating, and Security states the requirements regarding Confidentiality & Integrity, while the colleague from Run states the requirements regarding Availability. These requirements are transferred to business and translated into Capacity Management. However, this flexibility is limited by the interdependencies, because different functionalities are often managed within different clouds. There is not a single contract to manage the capacity across different providers, which in turn adds to the complexity. The capacity of the network is also a limiting factor. In short: you get the capacity under control on a single cloud tenant, but not on the cloud ecosystem.
There are more than 60 different loggings, which were created over the course of the years. For example, access logging is becoming very important for cybersecurity, but this logging generates a huge amount of data. New tooling like Splunk (a Python-based big data science tool) helps to process that data. It allows you to create intelligence over the whole pile of loggings in the technology stack: pattern recognition (user login, network access, etc.), exceptional situation detection, etc.
Hackers also use AI to undermine patterns, resulting in a cat & mouse game. Therefore, there is a
need for more BI / AI profiles in the security team.
Should the installation of software be restricted or not, in order to boost the performance?
We are coming back from the approach of allowing software introduction by teams, because you get an explosion of different cloud tooling. It is all licensed, and after a couple of years you need a major rationalization exercise. In the banking sector there is no space for software installation freedom, but in other domains it is potentially possible. The license cost in the latter organisations will, however, increase dramatically.
On the other hand, if you have different tools across teams and you build a certain functionality within a certain tool, other teams will need to integrate with this tool in order to reuse this functionality, following the "reuse before build" principle. The cost of this integration is very high.
You need to create space to learn something, but when a financial crisis strikes, rationalization will take place and much of the tooling will have to go. Today we are still on the learning curve and should give a certain freedom to the developers as a form of investment in knowledge growth.
SecDevOps introduces automation that can help in detecting technical vulnerabilities, but it is a complicated process where multiple tools are involved. In the code and the tooling you need to build in additional code. This responsibility lies with the team and it is not easy to achieve: it consumes a lot of time.
What is the best way to segregate the network while preserving the flexibility?
From the Security perspective, it is great to have a network segregation to isolate threats, which
opposes the need of SecDevOps for extra transparency and flexibility. It is difficult to find a balance.
SecDevOps teams should be made responsible for securing their own product. The built-in security should correspond to the business requirements (PCI, regulation, etc.). Teams should be made responsible and accountable for its correct realization. On top of that, we need lines of defense to assure accountability: in Build (new development) as well as during regular operations (Run). Build and Run are parts of the same ecosystem.
Before, there was a central Security department with full responsibility. Now the responsibility is split into “federated security”: partially within teams and partially central.
Is there a change in the role of Security within the Development Lifecycle due to SecDevOps?
If you have multiple components running at the same time with the same purpose (non-optimal reuse of components), you get more vulnerabilities to manage. This may lead to a potential breach in the wall. If a certain component has a security issue (e.g. Citrix Netscaler), you need to be able to track how many of these components are in use.
There is a program running within the scope of Agile to outsource certain activities off-shore. There is not really a change w.r.t. security compared to standard outsourcing. The only difference is when external tools are used (e.g. cloud). Sufficient assurance needs to be obtained to ensure compliance with the applicable standards.
Compliance (GDPR, security, etc.) can be automated, since it is based on business rules, for which a business rules engine can be applied. These tools, however, do not work for all cloud parties.
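A minimal sketch of what rule-based compliance checking could look like is given below. The resource attributes and the two example rules are assumptions for illustration, not actual GDPR or organisational rules.
    # Minimal sketch of rule-based compliance checking.
    def encryption_enabled(resource):
        return resource.get("encryption_at_rest") is True

    def no_public_access(resource):
        return resource.get("public_access") is False

    COMPLIANCE_RULES = {
        "encryption-at-rest": encryption_enabled,
        "no-public-access": no_public_access,
    }

    def evaluate(resource):
        """Return the names of the rules that the given resource violates."""
        return [name for name, rule in COMPLIANCE_RULES.items() if not rule(resource)]

    if __name__ == "__main__":
        bucket = {"name": "customer-data", "encryption_at_rest": True, "public_access": True}
        print(evaluate(bucket))  # ['no-public-access']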
You should have a configuration base: you need to know what you configure and how. If a team is working with separate tools, that team should take the responsibility to report its activities.
If you are using cloud services, such as AWS, there is not much to manage in terms of configuration management. In this case, you rely on the contract. Your configuration management becomes contract management.
You configure a number of local systems: infrastructure (if not in the cloud), applications, etc.
Interview 4
How are Information Security roles and responsibilities impacted by the introduction of
SecDevOps? Is there impact at all?
There are insufficient best practices defined for everyone to follow at this moment and there is still a lot of movement. Therefore, knowledge sharing is very important: by means of champions, but also by means of meet-ups within and outside your own company. We discussed SecDevOps a lot with companies
that were already more advanced than us, in order to learn how they addressed their SecDevOps implementation. It was easy to apply to our environment, since control objectives around cloud and automation are often very similar across companies, but the way you implement controls can vary. We talked to younger companies with less rigid roots in old technology, to learn new methods we, as an older company, were not thinking about. We talked to companies like Bol.com, Randstad, KPM: they were big enough to compare with us and were also involved in the innovation process. The best practices obtained in this way could then be implemented at NN. The information was shared between the teams internally, including the Risk and Security departments.
In each team you have people who are willing to change and others who hold on to the old way of working. You need the first category to convince the latter. For the SecDevOps teams it was very difficult to convince the Security team, since the Security team had to start working differently. Part of the tasks of the Security department moved to the SecDevOps teams, meaning that Security had to steer the activities in a different way.
We all had to understand that the company objectives are changing: instead of an administrative process with papers being filled in and reviewed for every change, teams could now release a change into production by themselves. So you need to discuss with Security which controls you need to build into the CI/CD pipeline to still preserve control and best practices. Security is no longer fulfilling the operational role (it is automated), but it defines the process and the pipeline controls (e.g. definition of firewall ports) and shares best practices (champions). Once or a couple of times a year the process should be reviewed, hackathons and red teaming organised, and best practices shared in order to achieve improvement. The Security department should fulfil a coaching role for the teams. They should also facilitate the use of different tools: the simpler the tools are to use, the more they will be used.
How to address the segregation of duties in SecDevOps environment where the boundaries
between different roles are faded in favour of flexibility & speed?
The simpler the functions and tools in use, the more tasks a single team can perform in a controlled way. Creating a database backup or a highly available solution is a very complex task if performed on company infrastructure: you have to install racks, configure servers, etc. In the cloud it is much easier: just one click and some configuration.
If you make security very complex, it will be very difficult to implement within different teams. The
trick is to make every service as simple as possible.
At NN almost every form of control is automated. You need at least two people from the team to review the change and to approve it. Another option is to have two developers do pair programming. For complex changes the number of “reviewers” could be increased to three or four. However, experience shows that if you add more than two reviewers, each reviewer spends less time looking at the change, since they do not feel responsible. The rest of the controls are incorporated in the pipeline. The team as a whole is responsible for the quality of changes.
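The two-reviewer control described above could be enforced by a simple pipeline gate along the following lines. This is a sketch under assumed data structures; real Git platforms expose approvals through their own APIs.
    def change_may_proceed(change, min_approvals=2, max_counted=4):
        """Allow a change only when enough reviewers other than the author approved it.
        Approvals beyond max_counted are not counted, reflecting the observation that
        extra reviewers add little value."""
        approvers = {name for name in change["approved_by"] if name != change["author"]}
        return min(len(approvers), max_counted) >= min_approvals

    if __name__ == "__main__":
        change = {"author": "dev1", "approved_by": ["dev2", "dev3"]}
        print(change_may_proceed(change))  # True

        self_approved = {"author": "dev1", "approved_by": ["dev1", "dev2"]}
        print(change_may_proceed(self_approved))  # False: the author's approval does not count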
No separate test team is foreseen. Some people in the team are more experienced in testing than others, but the team as a whole is responsible for testing. Testing a big application may require a separate testing team to support the development.
Each team is responsible for a product or a platform: the Azure team, the AWS team, etc. Each customer product is composed of components. The AWS team had one product: the AWS platform. But for each sub-component of the platform there was a separate pipeline. If something is broken in the pipeline, the impact is limited to one component of the product.
Our teams are built around products: one team may support one or multiple products. As long as
the product exists, the team will be there to support it.
We are working on an immutable infrastructure. Once the server is deployed, it is periodically re-
deployed. If the software is redeployed, the whole component is redeployed as well, so you are always
working on the latest version.
The target is to have a minimum of one release a week. This approach works well for teams with high deployment speed (my team implemented 100 changes per day), but some teams may turn patching off, resulting in delays with patch management. Therefore, it was decided to always push patches automatically.
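The combination of immutable infrastructure and automatically pushed patches could be approximated by a periodic job along the lines sketched below. The seven-day maximum age and the redeploy() hook are assumptions for illustration, not values from the interview.
    from datetime import datetime, timedelta

    MAX_AGE = timedelta(days=7)   # assumed policy value

    def redeploy(component):
        # Placeholder for the real redeployment hook (pipeline trigger, API call, ...).
        print(f"redeploying {component} from the latest patched image")

    def enforce_redeployment(deployments, now=None):
        """Redeploy every component whose last deployment is older than MAX_AGE."""
        now = now or datetime.utcnow()
        for component, deployed_at in deployments.items():
            if now - deployed_at > MAX_AGE:
                redeploy(component)

    if __name__ == "__main__":
        enforce_redeployment({
            "frontend": datetime.utcnow() - timedelta(days=10),  # stale: will be redeployed
            "api": datetime.utcnow() - timedelta(days=1),        # recent: left alone
        })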
The design is now done in such a way that if a failure occurs, the user is able to continue working with minor intervention (design for failure). A minimum workflow should always be supported, and to achieve this a new application architecture is needed. For example, if the application is designed to be stateless, you can easily restart it without losing data: only a single user session will be impacted and not all users simultaneously. For new applications, this design feature is easier to achieve, since you have a green field and new technology. For traditional applications, on the other hand, it is much more complicated and the success strongly depends on the type of application and its design. A huge difference is that old applications were built on top of infrastructure that was always available, and failures were addressed at the infrastructure level. In the cloud, you have to modify the application to deal with failure.
Not only the application architecture has to be modified, but also the supporting tooling. To constantly implement changes in a product, the tooling in the pipeline should be continuously modified (e.g. key management). Changing tooling costs more energy, since these activities generally involve manual work.
Which information security awareness, education and training is required to support the
new way of working?
The level of security knowledge across the teams was very low when we started, so a number of sessions were created around securing environments in the cloud. Documentation was also written describing how cloud systems can be used in an easy way.
In general, teams working in SecDevOps have a lot of different concerns: security, operations, CI/CD & cloud. You need to approach it all at a slow pace. In the meantime, controls are put in place for major risks to avoid failures and to create space for making (smaller) mistakes. Teams that are allowed to make mistakes have proven to learn the fastest.
How to control access to the source code in the SecDevOps environment which promotes
sharing?
No one in our organisation is allowed to execute changes in products, except for the pipeline. In the CI/CD process, changes can be deployed after review by two members. Only read-only access rights to production are allowed. When there is an incident, a manager can exceptionally approve temporary access (one day) to the production environment.
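The exceptional, time-boxed production access described above could be modelled as follows. The grant structure and the API are hypothetical; only the one-day duration follows the interview.
    from datetime import datetime, timedelta

    def grant_emergency_access(user, approver, duration=timedelta(days=1), now=None):
        """Return a time-boxed access grant approved by a manager."""
        now = now or datetime.utcnow()
        return {"user": user, "approved_by": approver,
                "granted_at": now, "expires_at": now + duration}

    def has_write_access(grant, now=None):
        """Write access to production exists only while an unexpired grant is present."""
        now = now or datetime.utcnow()
        return grant is not None and now < grant["expires_at"]

    if __name__ == "__main__":
        grant = grant_emergency_access("dev1", "manager1")
        print(has_write_access(grant))   # True, for one day only
        print(has_write_access(None))    # False: the default is read-only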
At the same time, source code is made publicly available by many teams within the company scope. The source code can be viewed in the Git repository and modified, but it cannot be deployed without a review and the CI/CD pipeline.
With DevOps, change becomes much more frequent. Starting from one change release every two weeks, we now execute 100 change releases per day. Whether someone is changing a large piece of code or documentation, it is all considered a change, independent of size and impact.
In the meantime, the duration of the whole release process can be reduced to a couple of minutes. The change is also created in a tool called ServiceNow, our ITIL tool. It contains a link to the test results.
Some changes are major, and we are obliged to communicate them upfront to our users: via the CAB, via internal communication tooling or via the Internet. There are at most two major changes per year. We use this slower process, with good and extended communication, to reduce the impact on the user.
We are regulated by DNB, our regulator. We passed smoothly through their auditing process because every change can be linked to a modification in production and is fully authorized thanks to the CI/CD pipeline.
You need to configure CI/CD approval tooling based on trends, monitoring and blocking. In the latter case, someone from Security should be involved to review what is going wrong. In most cases this can happen within a week or two after the release to production.
We learned a lot by doing: starting with writing a lot of documentation and ending with no traditional documentation at all. Today we offer only limited online documentation: what service we are offering, how you can use the service, which security controls are supported and how they work. For example, we offer data encryption and describe the code where the encryption is implemented, with a link to the technical code. This documentation can be used by users, but also by auditors and technical people who would like to know all the details.
To conclude, the documentation has been reduced a lot compared to the original process before SecDevOps.
Everything in the cloud is elastic and can scale up or down. You should not have too much capacity either, since it comes at a significant cost. The environment should be periodically reviewed from the point of view of capacity: optimization versus service. You should also monitor that upscaling and downscaling do not happen too often.
Cloud helps a lot to implement SecDevOps: automatic security, capacity, etc. In traditional environments it can be implemented but requires a lot of effort. Cloud is the enabler for automation via simple APIs. On premises, you deal with a collection of diverse tooling with no standardization, self-developed services, your own APIs, etc.
There were default tools available within the company (e.g. Splunk), but it was easier to integrate with the tooling of the cloud provider. So teams could use cloud tools and combine them with local tools, while monitoring could be performed across tool boundaries. DevOps teams need a lot of logging capabilities, and the cloud services were initially not as good as the local Splunk, but much easier to use due to the native integration in the environment.
Vendor lock-in is a drawback of cloud tools. You can use containers & databases to prevent lock-in, but for logging & monitoring it is worthwhile to use cloud tools. The gain and the simplicity are worth it, and it is just too expensive for an individual company, such as NN, to develop all tools on its own.
Should the installation of software be restricted or not, in order to boost the performance?
In practice, teams were allowed to experiment with tools and got time for that (2-3 weeks), but knowledge sharing was obligatory. There were a number of innovative teams investigating local tools; however, cloud-native tools were the preferred option. Experimenting with tools allows teams to better understand the process.
Tools were evaluated largely in the cloud environment, in a separate sandbox, without connections
to the company systems & data.
Checks are executed in real time on running systems with cloud-native tooling. In the cloud you can see all the reports in just a couple of clicks. Every team had its own dashboard, as well as an integrated dashboard across all teams. Security teams reviewed these dashboards from time to time to evaluate the status.
Nexus, Archsite, etc. were on-premises tools with many more capabilities, but they were used less than cloud tools, because cloud tools were much easier and natively integrated into the environment.
What is the best way to segregate the network while preserving the flexibility?
In the cloud, the network setup is different from the one on premises: you have no separate firewalls, so you need to create segregation yourself. We differentiated between production and non-production environments. Teams could indicate themselves which other devices and teams they would like to communicate with. Teams could decide on access themselves and on the configuration of that access (e.g. firewall rules). As a company, you could specify a number of general boundaries (e.g. closing outside ports).
The role of security is changing: it builds controls into the cloud environment. Development engineers make the initial proposal and discuss it with the security department. The team as a whole is now responsible for security.
Is there a change in the role of Security within the Development Lifecycle due to SecDevOps?
Controls in the pipeline work fine for standard things, but it is necessary to keep organising hackathons to hack into the pipeline and in this way introduce innovative improvements in addition to the standard controls.
Code conventions across teams are a bit chaotic: too many teams controlling conventions leads to very strict conventions. Therefore, our old teams keep working in their established way, while new teams
are more obliged to follow predefined standards. This results in extra work, so after a while the naming conventions and coding conventions need to be standardized.
Involve the risk department in the review of the SecDevOps process, since they are responsible for processes. If you send a risk person to SecDevOps teams just to control the process, they get zero respect from the team, while if they make suggestions on the hands-on tasks from the beginning, it is a totally different story.
It is partially implemented by the pipeline, but you cannot test all the compliance requirements (e.g. retesting your code yearly); sometimes you need to rely on documentation. Some compliance checks become irrelevant due to the use of cloud (e.g. failover of the environment in case of failure) or are simply covered by the contract with the supplier.
With cloud, we rely on the configuration of the supplier, who provides the assurance. You have an auditor who controls the cloud provider: compliance & assurance. We work with cascading control frameworks: the more you reuse from the cloud, the fewer controls you need to implement yourself. And the cloud provider guarantees that all controls are correctly implemented.
Configuration, infrastructure, compliance, tests, … everything is code in Git. The whole setup is described in the configuration: load balancer, firewall, … everything is configured. If something is changed in the configuration in Git, it is automatically pushed to production after a check.
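The "changed in Git, pushed to production after a check" flow described above could look roughly like the sketch below. It assumes it runs inside a Git checkout; run_checks() and apply_to_production() are hypothetical placeholders for the real policy checks and deployment step.
    import subprocess

    def changed_files(base="origin/main", head="HEAD"):
        """List files changed between two Git revisions."""
        result = subprocess.run(["git", "diff", "--name-only", base, head],
                                capture_output=True, text=True, check=True)
        return [line for line in result.stdout.splitlines() if line]

    def run_checks(files):
        # Placeholder for the real policy/lint checks run against each changed file.
        return all(name.endswith((".yml", ".yaml", ".tf", ".json")) for name in files)

    def apply_to_production(files):
        # Placeholder for the real deployment step triggered by the pipeline.
        print(f"applying {len(files)} configuration change(s) to production")

    if __name__ == "__main__":
        files = changed_files()
        if files and run_checks(files):
            apply_to_production(files)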
Interview 5
How are Information Security roles and responsibilities impacted by the introduction of
SecDevOps? Is there impact at all?
A cross-functional team is now in place, which shares the responsibilities where we had a separation of duties before: developers were not allowed to go to production and to release code. Today people from development even have administrative access to databases and OSs in a production environment. Therefore, it is much harder to enforce policies than in the pre-DevOps era.
DevOps has a notion of increased automation: in some organisations you can push to production, in others you cannot. Before, you had security tests as gates, which are no longer possible. This pushes the security teams to change their approach, since gates prior to releases no longer exist.
How to address the segregation of duties in SecDevOps environment where the boundaries
between different roles are faded in favour of flexibility & speed?
The whole point of DevOps is to increase velocity, to allow the development teams to progress faster and to minimize the issues due to inconsistencies in the environment. It is hard to argue against this, and security professionals need to go with the flow. Don't fight against the forces that make DevOps popular; just find a way to proceed. Get familiar with how DevOps works and use new types of tools & techniques.
DevOps mainly brings an increase in tooling and automation. For example, it is easier to patch in time within the development lifecycle itself. Platforms can constantly be updated and patched for security. We make sure that the container images are updated as soon as a new version is released. However, the application may break because of dependencies on the previous versions. The damage can be detected via automated testing, as much and as often as we wish. All the environments must be exactly identical. Patching should be done within the development environment. The patched container will be propagated through the normal lifecycle pipeline.
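The container-patching flow described above is sketched below: rebuild on a newer base image, then let the automated tests decide whether the patched image propagates. latest_base_version(), rebuild_image() and run_test_suite() are hypothetical hooks around real registry, build and test tooling.
    def latest_base_version(base_image):
        # Placeholder: in reality this would query the image registry.
        return "1.2.4"

    def rebuild_image(app, base_image, version):
        print(f"rebuilding {app} on {base_image}:{version}")
        return f"{app}:{version}"

    def run_test_suite(image):
        # Placeholder: the automated test results gate the propagation of the patched image.
        print(f"running automated tests against {image}")
        return True

    def patch_if_needed(app, base_image, current_version):
        """Rebuild on a newer base image and let the tests gate the propagation."""
        latest = latest_base_version(base_image)
        if latest == current_version:
            return
        image = rebuild_image(app, base_image, latest)
        if run_test_suite(image):
            print(f"{image} propagates through the normal lifecycle pipeline")
        else:
            print(f"{image} blocked: dependencies on the previous version broke the tests")

    if __name__ == "__main__":
        patch_if_needed("payments-api", "python", "1.2.3")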
Regardless of whether development is on premises or in the cloud, you still have appropriate tooling. In the cloud you have PaaS, with no need for installation & configuration of your own development environment. It boils down to the practices of the Dev and Ops teams: if they are not disciplined, it will show in all environments. Vendor lock-in is not really a cloud-specific issue, since vendor lock-in has existed at all times (the same situation existed in the past with the mainframe). When you choose a cloud provider, you make a long-term choice, which is reflected in the choices of monitoring and other services.
There are many interpretations of the Security Architecture role: solution security architect, enterprise security architect, oversight role, risk assessment role. The role of the security architect is important but is changing. Strong policing, checks at the gate, mandatory security assurance, red flags: all these activities will be impacted. For example, with DevOps there will be no documented architecture. The architect role is shifting towards being a consultant at critical points in the sprint. Every 2-3 sprints we need to make a security & risk assessment.
Which information security awareness, education and training is required to support the
new way of working?
Learning from other people's experiences is a good idea. The training and learning approach is not something new for DevOps; it was implemented the same way before. It should be done and it is obvious. Just a remark: large companies (banks, telecom, insurance) should not look at Spotify and Netflix to set up their DevOps, as it does not reflect their line of business. Neither of these new streaming companies is able to run a bank. Certain things they do well should be copied, but we should not ape the patterns blindly. Exactly for this reason, many transformations fail.
You need people from different backgrounds in security, like an application security background. Such a person shouldn't just be a security expert, but should also know a lot about the risks of software development. You have different roles and competencies within a security role.
How to control access to the source code and production in the SecDevOps environment
which promotes sharing?
Some roles in the DevOps team should have read access to production (product manager, business analyst). Not everyone needs full access. Administering the Docker environment, OS and DB should be done by only some roles. Organisations should use CyberArk or something equivalent to ensure strong privilege-based access management: only for specific requests, for a certain duration, for certain profiles, etc. We want to increase monitoring of access as well.
In traditional organisations we had the CAB. Anything other than an emergency fix had to pass through it. Agile changes this way of working, since you need to develop incrementally, within short sprints. However, we can't say that there is no need for a CAB any more; it will always be there for large organisations (e.g.
banks). Developers, however, don't need to go to the CAB for minor upgrades, only for bigger decisions (a switch from Oracle to MongoDB).
The way in which we write documentation changes: there is no longer a need for lengthy documentation. You have the infrastructure-as-code concept, so the documentation is now an executable script: a series of steps in a script (a standard operating procedure for upgrading a database, etc.). Documents are now replaced by a script and a short document explaining what the script does.
How we do capacity management has changed with the introduction of the cloud. I am talking about correct sizing of the production environment, based on the expected demand. Many organisations have a cloud strategy right now. You can do your development on premises or in the cloud, but it helps to be in the cloud.
If you need to set up a VMware virtualization environment, you need to size it and have the right amount of hardware. Before, this was an expensive and risky exercise to perform. In the cloud this is no longer the case, since the hardware and infrastructure layers are provided by the cloud provider at incremental cost. The risk is also less severe if you make wrong capacity assumptions.
Event logging and monitoring do not necessarily change. However, DevOps tooling may support
automation, simplifying the process.
Application monitoring is much more mature than security monitoring today. Development teams
do not pay sufficient attention to security monitoring: unauthorized access, suspicious events, etc.
Should the installation of software be restricted or not, in order to boost the performance?
For example, developers in my team want to download components/libraries freely without going to security each time. In the past you had an approved list of components. Now the approach is to allow the download of any component, but a set of policies needs to be applied: no known vulnerabilities, the component may not be too old, there must be a large community behind the component, etc. There is no longer a need for manual checks, since we rely on tooling: Black Duck, SLM (Service Level Management) or Trend Micro (automation of security within the CI/CD pipeline).
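The component policy described above (no known vulnerabilities, not too old, a sufficiently large community) could be expressed as a simple check like the one below. The thresholds and the component record format are assumptions; in practice tooling such as Black Duck performs these checks.
    from datetime import datetime, timedelta

    MAX_COMPONENT_AGE = timedelta(days=3 * 365)   # assumed threshold for "too old"
    MIN_CONTRIBUTORS = 20                         # assumed proxy for "a large community"

    def component_allowed(component, now=None):
        """Apply the download policy to a component record."""
        now = now or datetime.utcnow()
        return (component["known_vulnerabilities"] == 0
                and now - component["last_release"] <= MAX_COMPONENT_AGE
                and component["contributors"] >= MIN_CONTRIBUTORS)

    if __name__ == "__main__":
        library = {"name": "some-lib",
                   "known_vulnerabilities": 0,
                   "last_release": datetime.utcnow() - timedelta(days=90),
                   "contributors": 150}
        print(component_allowed(library))  # True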
The situation may vary between a start-up and a bank, but there is a trade-off between standardization & flexibility. Teams in a common technology stack and common domains should standardize on the same tooling and CI/CD configuration; otherwise it is a disaster even from the application development point of view. However, teams should have the flexibility to decide on architecture, as long as it meets the basic policies.
On the one hand, you may have vulnerabilities in custom-developed code, because the developer has missed something or made wrong assumptions (e.g. a cross-site scripting attack). On the other hand, developers also often rely on third-party components or open-source frameworks, which may contain vulnerabilities as well. For the first category there is no difference between the time before and after the introduction of DevOps: train developers in secure coding practices, foresee the right set of tooling to detect vulnerabilities, security testing, code review, etc. The only difference is that, because of the increasing emphasis on automation, it becomes easier to develop secure code.
If we talk about third-party packages, there is an increasing occurrence of vulnerabilities. We are talking not just about a changing set of practices, but also about new approaches and new architectures, including third-party components. You seldom build from scratch nowadays, but reuse what already exists for authentication, application monitoring, etc. You reuse more and build less, meaning that more vulnerabilities arise. It requires specific attention and tooling to address the gap: Sonar, etc. Tracking of published, known vulnerabilities is also important, as well as regular system/application updates.
What is the best way to segregate the network while preserving the flexibility?
Network segmentation is still very necessary. The network is the target and the vector of attacks. Clear segmentation between trust zones/areas, as well as between production and non-production infrastructure, is necessary. Administrators are allowed to have local access to the machines in the non-production environment, which is absolutely not the case in the production environment. The request for non-segmentation comes from inadequate mastery of tooling and methodologies.
At the same time, we are moving into virtualized networking. In containerized environments you need to rely on a different approach. You just cannot use the same container orchestration layer. Neither should you have two separate physical layers for production and non-production. In the cloud you can use tenants and separate instances of containerized infrastructure. There is less need for basic network segregation.
DevOps introduces more automation and self-service. Still, security is hard to get right, since there are many more things for a developer to take care of and the development needs to be finished much faster, in a matter of weeks. Best practices should be addressed automatically to bring down the burden. We need to find security defects much faster and earlier in the cycle. For example, static analysis used to be done in a common build environment (you waited until the next build to get feedback); now developers run the analysis themselves, directly on their code. More security testing (functional and non-functional) should be automated and entered into the pipeline. The code should be tested more often with less effort. Security becomes a part of the usual practices in SecDevOps.
You should look for a balance between standardization of security tooling and restricting the developer too much. You should define standards: a minimum number of stages in CI/CD, which rules you apply within the tools, reuse of test cases from one application to another, etc. Security teams should not really provide a whitelist of accepted components, since this goes too far. We just need a security product that keeps track of the component dependencies and their known vulnerabilities. As a security team, you allow developers to choose the components they wish, as long as these follow the set of predefined rules.
With off-shore development, the third party needs to be on the same platform, with the same tooling. Just throwing code over the wall does not work. There should be time alignment as well, since there are more time dependencies. We should also be aware of vulnerabilities introduced by the third party.
Tooling for configuration management has changed. GitLab allows new capabilities: you can work together on a common structure, via branching, merging, etc. Configuration management is different today compared to the past. Access control to branches is different compared to before; not everyone has the same access. Not everyone can have the same access to the same code: you define groups of developers.
We have new kinds of configuration items. Before, we had source code, but now you also have container images (new configuration items), infrastructure as code and more complex items. So security constraints around those become important: how to safely store cryptographic secrets, access control, etc.
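One common way to keep cryptographic secrets out of versioned configuration items is sketched below: the configuration only references a secret by name, and the value is resolved at deploy time from the environment (a vault or key-management service would be the production equivalent). The configuration keys and variable names are assumptions for illustration.
    import os

    # The configuration item only references the secret by name; the value never enters Git.
    config = {
        "database_host": "db.internal",
        "database_password": {"secret_ref": "DB_PASSWORD"},
    }

    def resolve(config):
        """Replace secret references with values taken from the environment at deploy time."""
        resolved = {}
        for key, value in config.items():
            if isinstance(value, dict) and "secret_ref" in value:
                resolved[key] = os.environ[value["secret_ref"]]  # raises KeyError if missing
            else:
                resolved[key] = value
        return resolved

    if __name__ == "__main__":
        os.environ.setdefault("DB_PASSWORD", "example-only")
        print(resolve(config)["database_host"])  # the secret itself is never printed or committed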
Interview 6
How are Information Security roles and responsibilities impacted by the introduction of
SecDevOps? Is there impact at all?
In a traditional team, roles are organised per competence: the “four-eyes principle” or “toxic pairs” are easily achievable with different actors in the development cycle. In the Agile context, with T-shaped teams, everyone plays different roles simultaneously: for example, development and deployment to production. However, Agile does not mean DevOps, which leaves teams a choice to do, for instance, Continuous Delivery but not Continuous Deployment. If a team chooses Continuous Deployment, it starts doing everything end-to-end, including deployment to production.
Regarding security, a number of new roles should be defined. In a Continuous Deployment pipeline you need approvers to perform control at certain points in the pipeline. The approvers are responsible for different actions than in waterfall: they address functional & non-functional approvals. Since the Product Owner cannot judge the security of the product due to a lack of technical knowledge, a Security Architect can be “plugged into” the pipeline to make the judgment on product security. Other elements in the pipeline may help with the security of the product as well: SAST, IAST, DAST, pen testing, etc. Control decisions should be taken per product by the Product Owner, based on a balanced approach between agility, cost, speed of controls and security (a kind of risk management).
How to address the segregation of duties in SecDevOps environment where the boundaries
between different roles are faded in favour of flexibility & speed?
Segregation of duties is often reviewed by auditors. They would like to have assurance that developers cannot release to production. However, in the Agile world, their primary concern should be to assure that one person cannot commit fraud in production. For audit purposes, a Product Owner is always accountable for every release. Therefore, no developer contributing to the code in a release can approve that release to production.
We have stages in the release pipeline, which are approved by different people. There is still a kind of CAB approach. But if a developer is an administrator, he can remove approvals from the pipeline for one or several runs. So we decided that a developer may never become a release administrator for his or any other product. Each product has its own pipeline, but within a pipeline an administrator can use a PowerShell script to commit fraud: he can remove one PowerShell script and replace it with his own, which works on another product. You may try to lock every release pipeline with a different admin account, but you need to decide how much security overhead you are willing to bear.
We decided to have one team that does the release to production and guarantees the segregation of duties. We tried to approach it differently, but each time we found a way to manipulate the pipeline. A separate team, however, creates a lot of friction with some teams and has to be enforced. Normally, it should not pose an issue: you build a release pipeline for a product and it does not change much; if you change the release pipeline too often, then the design of the pipeline is just not stable.
When we started the transformation, the major problems we faced were a project-based organisation, security (very traditional development, lots of network zoning) and the data centre infrastructure.
Before, we had projects: an initiative whose result is an update of one or multiple products. In the Agile context, we talk about user stories, organized in features. In theory, you no longer need project managers: different product owners can just agree on the actions and the timing. However, if an epic is so large that it is not possible to align among many stakeholders, we can still use a project manager to align between tribe leads and product owners. In this case, the project manager cannot interfere with the product owner, who defines the backlog.
In general, you have a lot of dependencies on old, badly designed products. Before, many companies were working on a project basis, where the architecture was of secondary importance. Companies often have no idea how much a change to an old product costs. The cost of a change is also constantly rising, but no one realizes that agility is declining. Actually, the agility lies in the architecture and a good design. The lifetime of a product is 5-10 years, during which 30% of the cost is spent on the design, while the remaining 70% is maintenance cost.
The advantage of the cloud is that the infrastructure is ready to be delivered in agile mode. Otherwise, in a local data centre, a lot of things need to be done manually. The cloud is, actually, developed with DevOps in mind.
For cloud deployment we keep the pipeline on premises. It was the idea of a cloud architect, because software deployment in the cloud would result in a lot of security issues. The only problem is building the right infrastructure, since you need to support on premises the technologies supported in the cloud. The source code is always kept in the data centre. 95% of our deployments take place on premises, while 5% take place in the cloud. The mainframe stays in the company as-is, so the integrations with the cloud remain an issue.
For the selection of development tools (e.g. IDEs), a lot of freedom is given to developers and operational people. Each time it can be proven that a new tool has an advantage over an existing one, it will be allowed for installation and use. However, the pipeline should be the same and homogeneous for everyone. For certain products we also do only continuous delivery, with fixed test and integration environments; these go together in the same release train. Therefore, we look at technical and functional dependencies to decide on the freedom of tooling selection.
Which information security awareness, education and training is required to support the
new way of working?
There is a lot of obligatory security & compliance training, but it is very generic. If you look at development, there is no training at all.
There is a PLC flow that describes the sort of documents needed for a project, security requirements included (slightly modified for agile). The agile PLM is not attached to the project. There is more awareness w.r.t. tooling, such as SAST.
How to control access to the source code in the SecDevOps environment which promotes
sharing?
Release pipeline admins are a separate team. In the release pipeline you have parameters: for each environment there is a functional user and a password required to start the deployment. The password in each tool is encrypted.
Our source code is stored separately from work item management, but they are linked to each other. Therefore, teams have access to their backlog (divided per project), with separate access rights for the source code. Sometimes we still work in project mode, because it is difficult to find the right resources for each team. These temporary teams complicate access right management. Therefore, access
to source code is only granted based on permanent product teams (with an organogram and decent access management).
We need an initiative around open source management, because developers often use open-source components. In order to achieve this, there is a need for means to detect security vulnerabilities in open-source code (which cannot be detected with SAST) and to manage license compliance. There are often open-source components with limited usage for which no check is done. One option would be to use Artifactory (or equivalents such as Nexus or Black Duck), but the issues we face are often seen from the infrastructure & network perspective by security people. Therefore, Artifactory-like tools are difficult to agree upon and to manage.
It is not easy to correctly estimate the impact of a change: you can miss dependencies and impact. In the design phase and at the beginning of the sprint you need to review whether you can release the product without impact. Do you create a dependency with another product that was not there before? Otherwise, you will have an issue in production.
When the pipeline is mature enough, you can integrate control into the pipeline itself. Traditional ITIL change management should disappear. However, ITIL is also trying to adapt to Agile with the introduction of v4. By definition, the pipeline should be secure, so the CAB process becomes obsolete. To avoid double work, you initially keep change management, but you link it to your pipeline. The question is: may the pipeline create a change request? Sometimes people want a ticket in their own change system. With Azure DevOps, we use ServiceNow for IT service management; it supports the creation of change requests.
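A pipeline step that creates such a change request could look roughly like the sketch below. The instance URL, credentials and fields are placeholders; it assumes the ServiceNow Table API (POST /api/now/table/change_request), which is the generic way to create records, and is not presented as the interviewee's actual integration.
    import os
    import requests

    def create_change_request(short_description, test_results_url):
        """Create a change request in ServiceNow from a pipeline step (Table API)."""
        url = "https://example-instance.service-now.com/api/now/table/change_request"
        payload = {
            "short_description": short_description,
            "description": f"Automated change from the CI/CD pipeline. Test results: {test_results_url}",
        }
        response = requests.post(
            url,
            json=payload,
            auth=(os.environ["SNOW_USER"], os.environ["SNOW_PASSWORD"]),
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["result"]["number"]  # e.g. a CHG-numbered record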
DevOps should limit documentation to the minimum. Temporary documentation lives in PBIs and features, containing all the temporary info. Permanent documentation should be decided by the team (e.g. for newcomers, protocols, ports, technology, end-user manuals). For temporary documentation, use the Azure tools and the pipeline. For permanent documentation, use a permanent repository, like SharePoint.
If you work agile with product-based teams, you budget your team at fixed capacity.
Infrastructure is not yet Agile. It sometimes takes weeks to create a test environment: with user IDs, setup, data… Other companies complain that the setup takes 4 hours. Our problems are primarily with test data, connectivity and functional user IDs. You are only as good as the weakest part of the chain.
Everything is stored in the log and nothing is thrown away. There is one exception: development branch builds are retained only for the last 30 days or the last two builds. All anomalies are detected through logging. We have a separate build pipeline to check whether we have toxic pairs.
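A "toxic pairs" check of the kind mentioned above could be as simple as the sketch below: no user may hold a combination of roles that breaks segregation of duties. The role names and pairs are illustrative assumptions, loosely based on the rule that a developer may never be a release administrator.
    TOXIC_PAIRS = {
        frozenset({"developer", "release_administrator"}),
        frozenset({"change_author", "change_approver"}),
    }

    def toxic_pair_violations(user_roles):
        """Return (user, pair) tuples for every user holding a toxic combination of roles."""
        violations = []
        for user, roles in user_roles.items():
            for pair in TOXIC_PAIRS:
                if pair <= set(roles):
                    violations.append((user, tuple(sorted(pair))))
        return violations

    if __name__ == "__main__":
        print(toxic_pair_violations({
            "dev1": ["developer", "release_administrator"],   # violates segregation of duties
            "dev2": ["developer"],
        }))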
What is the best way to segregate the network while preserving the flexibility?
In a traditional context we think about creating network zones: Dev, Test, Prod. In an agile context this no longer makes sense, since you no longer have people working in separate contexts; one team brings everything to production.
Today, you should still have three network zones: one for business production, one for infra testing and one for the application factory (similar to business production, but for emergency releases, where the whole factory has the same criticality as the business itself). These concepts are difficult to accept for security architects who think in terms of infrastructure. This should change, since today it is one of our biggest problems. For example, as a developer I need to be in a development environment; I check my code into my Git repository and the Git repository installs it in production through Azure DevOps. As a result, I am constantly communicating between development and production. However, Security people do not understand that I cannot create a separate release pipeline for development and for production. Security should understand that they cannot change DevOps and cloud principles.
Is there a change in the role of Security within the Development Lifecycle due to SecDevOps?
The Security Architect should think about the customer who is going to use the infrastructure: business people, IT, etc. They need to address plenty of questions: one business production pipeline, connectivity, DMZ, etc. IT customers are complicated customers, since they are T-shaped, able to go directly to production, provide information directly to the development, etc. The whole delivery system is a jump server. The developer should be able to reactivate the jump, check in source code and start release pipelines; the pipelines will push the source code to the application factory or to production.
Extra security roles will be defined to address the new situation. You need not just infrastructure security, but also application security and API security (authentication). It becomes a much larger domain, where it is easier to transform a developer into a security application architect than to transform a network security architect into an application architect.
There is no issue with the combination of outsourcing and SecDevOps. However, being the PO of an outsourced team is not easy, while mixed teams are much easier: different contracts, a different company. Internal teams work to optimize the product, while an external firm works towards KPIs. Outsourced teams cannot really decide what you do in your release pipeline.
Everything is code; all parameters should be stored as code. Some parameters in technical systems cannot be accessed by developers; you can protect them in folders and sub-folders. Some parameters can be stored in a pipeline, which is also versioned.
Interview 7
How are Information Security roles and responsibilities impacted by the introduction of
SecDevOps? Is there impact at all?
In small companies we work with a CI/CD pipeline and continuous releases, but for a bank this way of working is still a very long way off. I even doubt whether it can ever work, because of compliance, pen testing, etc.
The development role evolves towards operations. In small Agile organisations, like Showpad, development and operations are linked to each other. A developer may even deploy the product to production. To preserve controls, rules are defined at the boundaries of the (cloud) infrastructure which the developer cannot alter: exposing a new TCP service or putting a management portal on the Internet is not allowed.
At a bank, Dev and Ops are still strongly segregated. A developer often needs to launch a request to
another team in order to get his server / operations online.
Not all developers have a sufficient understanding of operations, just as not all operational people understand security. We should define a security champion role: a person who has an interest in or knowledge of the subject. The champion gets ownership of a number of activities: threat modelling, security review before go-live, etc. Not all developers will or can become a security expert. Champions are often supervised by an expert.
The security team can act as police defining the rules (the case at BNP), or as an advisor. The first way of working is against the agile principles and is not scalable. For instance, static code scanning should be part of the work of an Agile team, part of the tasks of the Security Champion. For the second way of working you need new competences in order to give advice on specific topics.
How to address the segregation of duties in SecDevOps environment where the boundaries
between different roles are faded in favour of flexibility & speed?
Again, you have the difference between high-tech companies and the rest. High-tech companies no longer have segregation of duties, and all the responsibilities are given to certain profiles. They have no compliance requirements, only security requirements. Showpad, for instance, does constant vulnerability scanning in their environment, and the moment something is deployed with vulnerabilities, a roll-back is immediately performed in a semi-automatic way.
If you wish to preserve segregation of duties for compliance reasons together with DevOps, you need to proceed to the UAT environment in an autonomous way, using CI/CD compliance checks. Only in the last stages should other teams be involved. In this way, your agile way of working becomes Agile-Waterfall, where your sprints last several weeks, followed by a release every three months.
Four-eye coding is also an often-applied principle. The traditional testing role disappears. Showpad has a testing team, but not everything is tested: new features are tested well, bug fixes are not.
The main advantage of SecDevOps is automation. Generally, there will be a piece of infrastructure as code, and this is the image/baseline you start every development from. This image should be sufficiently secure to be used in various development efforts. The baseline should also be the same for all environments to gain speed. This image should be ready to move from one system to the next.
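The "same baseline for all environments" idea described above could be checked with a small script like the one below: every environment must reference the same hardened baseline image digest. The environment names and digests are placeholders for illustration.
    APPROVED_BASELINE = "sha256:0f3a..."   # digest of the hardened baseline image (placeholder)

    def environments_off_baseline(environments):
        """Return the environments that are not built from the approved baseline image."""
        return [name for name, image_digest in environments.items()
                if image_digest != APPROVED_BASELINE]

    if __name__ == "__main__":
        drifted = environments_off_baseline({
            "development": "sha256:0f3a...",
            "test": "sha256:0f3a...",
            "production": "sha256:9c1b...",   # drift: not built from the shared baseline
        })
        print(drifted)  # ['production']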
If you are in the cloud, the models and templates are already available to start deployment, but you can still maintain the same capabilities on your own virtual machines as well. On real physical infrastructure this option is not possible: you cannot deploy physical infrastructure as code.
Which information security awareness, education and training is required to support the
new way of working?
It is easy to retrain developers to take on operations in a technological company: teach them networking (how to put a new server online), risk management and awareness. Generally, they know OWASP but not the underlying layers (below the application layer), and this is their weakness.
If, as a security team, you wish to give full control to developers, you need them to be aware of the consequences. They should also receive a hardened operational image for deployment, with no errors. Developers should not be allowed to modify this image any further, but if they do, the security team should be immediately informed via automatic monitoring & alerting.
As an organisation, you need to accept the risks and decide how to deal with SecDevOps from the security perspective. Often the Security department cannot keep up with this way of working. Therefore, you need to start with a risk analysis and build controls for automatic deployment to production. An example is automatic server provisioning: before deployment, a safe image is created, which is checked a couple of times a year to verify that it is still safe and sufficiently hardened.
How to control access to the source code in the SecDevOps environment which promotes
sharing?
There are not many good implementations in place. It is very important to have extensive logging and monitoring: any changes to the image should trigger alerts to security. The security team should decide upfront: Where do you want to monitor as a team? What are the largest risks? How can logs be used to demonstrate compliance?
Use privileged access rights: no specific privileges until the developer needs to do something and
can request access right elevation.
CI/CD should be the controller of the change. Every deployment should get a code review, code scanning and the same hardened image; some checks should be built into the process to prevent large risks from going into production.
A quick checklist is used for banks. A planning is also created listing the features to be implemented in the following three months. This is a sort of planned CAB to prioritize the features. The CAB reviews and decides on the impact. For example, if there is a data impact, the CAB will request extra security testing. This light CAB reduces the overhead so that changes do not need to pass through the official CAB.
The same approach is used at Showpad, but not for bug fixes (they go to production ASAP). One specific sprint is planned for bug fixes. While at banks bug fixes also go through the feature discussion meeting, with a release every three months and sprints every two weeks, Showpad releases new features once a week and bug fixes continuously.
Capacity planning in the cloud is mainly budget planning: the more servers the more budget. Disk
space is no longer an issue, as was the case for non-cloud capacity management.
Logs are required for compliance within the scope of SecDevOps. Security needs to define clear use & abuse cases: What are the boundaries of your DevOps team? What are your security guidelines? What is the risk you want to take?
Should the installation of software be restricted or not, in order to boost the performance?
As long as the installed software is not going to expose extra services on the Internet, it can be installed freely. You do not want to have development tools in production, but if they are installed on developers' test systems, the risk is low. If you limit the freedom of developers, you will complicate the development process.
A developer may have a normal computer (for mail, daily work) and a virtual system for development (a “contained container”). On the latter image the developer will have admin rights. No admin rights should be allowed on the other machines.
Everything happens automatically & continuously. If you need to deploy or change something, there is an automatic check for vulnerabilities. There should also be an automatic trigger to isolate an infected system. To be able to work this way, you should preferably work in the cloud. You can build this functionality on premises as well, but the investment will be huge (in the cloud it is default built-in functionality). In the cloud, if you activate this feature, you automatically see high risks, alerts to turn the machine off, etc.
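The automatic isolation trigger described above could be approximated as follows. isolate() stands in for the cloud provider's quarantine action (e.g. detaching an instance from the network); the severity levels and the finding format are assumptions for illustration.
    CRITICAL_LEVELS = {"HIGH", "CRITICAL"}

    def isolate(system_id):
        # Placeholder for the cloud provider's quarantine action.
        print(f"isolating {system_id} and alerting the security team")

    def handle_findings(findings):
        """Isolate systems with high-risk findings; lower severities only raise an alert."""
        for finding in findings:
            if finding["severity"].upper() in CRITICAL_LEVELS:
                isolate(finding["system_id"])
            else:
                print(f"alert: {finding['system_id']} has a {finding['severity']} finding")

    if __name__ == "__main__":
        handle_findings([
            {"system_id": "vm-042", "severity": "critical"},
            {"system_id": "vm-007", "severity": "low"},
        ])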
On premises we have an offering where a vulnerability scan is executed once per day in the customer's environment. However, if our scan has already passed a certain element that has since been redeployed, it may take hours before the scan is rerun and the vulnerability detected. Every day we rescan a specific IP range for known vulnerabilities. This approach works only in a stable environment, because in an unstable environment with changing IPs and services, the delay of scanning is too large. Therefore, we need to evolve towards security as a service. Before, we did penetration testing once a day/month; today we try to automate the manual steps on the low-hanging fruit.
We use Nexus and Rapid7 tools to scan infrastructure: scratch the surface of your application to get the vulnerabilities out of it. The company has an online profile: IPs where everything of this company is published. Nowadays you can easily set up a new environment that your security team is not aware of: one account, one use. Our footprint analysis scans the IPs linked to the company, and we report if something new is found. The problem is that with a manual organisation this approach cannot scale.
What is the best way to segregate the network while preserving the flexibility?
Segregation should be preserved for security, to avoid, for example, admin ports being opened to the Internet. A large risk arises when developers with limited knowledge of security are given unlimited access.
Is there a change in the role of Security within the Development Lifecycle due to SecDevOps?
Security Champions should be the first line of security compliance and security information. Security Architecture should shift left to get closer to the development.
I do not see changes, just an extra security risk. If the outsourced partner has an operational function that allows him to deploy, you will have even more risk.
Continuous testing and source code review. We use source code review to perform the initial review and then we go a bit deeper, with pen testing per feature, to find a balance between the speed of deployment and control. All compliance reporting is done in XML format so we can import it into JIRA or OmniTracker. Within NVISO we have our own XML-based reporting as well, to allocate findings to the correct teams as bug fixes (backlog). XML reporting is our differentiator.
Infrastructure as code: there is the v-baseline, so that everyone is using the same configuration.
Interview 8
How are Information Security roles and responsibilities impacted by the introduction of
SecDevOps? Is there impact at all?
Whenever you get a security control, you need to map it to a control owner. For every control there are 4-5 different roles needed to make it work: from defining the control, to designing, implementing, testing, operating and auditing it. When you apply this to DevOps, you need to remap all these different roles and see which roles you can automate.
Once you have broken your control down to make sure it is applied consistently, the role assignment will depend on the control and on the definition of done. A dedicated security role will determine the applicable controls and the responsibilities to be taken up within the team.
How to address the segregation of duties in SecDevOps environment where the boundaries
between different roles are faded in favour of flexibility & speed?
What is the point of the segregation of duties? Scripts, scans, checks do not allow you to bypass
control. If you need to run a complete security scan, you cannot avoid it.
You need to take a risk-based approach and integrate controls into the CI/CD pipeline: do you want to break the flow to integrate a check into the pipeline? You need to make sure that the developer does not have control over the automation of the pipeline. You allow the developer to release to production only in moderate- and low-severity situations, using pipeline checks: scans, validation & regression tests.
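The risk-based release rule described above could be expressed as a simple gate like the one below: an automated developer release is allowed only when all pipeline checks pass and nothing above moderate severity is open. The severity ordering and check names are assumptions for the example.
    SEVERITY_ORDER = {"low": 0, "moderate": 1, "high": 2, "critical": 3}

    def developer_may_release(check_results, open_findings, max_severity="moderate"):
        """Allow an automated developer release only for low/moderate situations with green checks."""
        checks_green = all(check_results.values())   # scans, validation & regression tests
        worst = max((SEVERITY_ORDER[f] for f in open_findings), default=0)
        return checks_green and worst <= SEVERITY_ORDER[max_severity]

    if __name__ == "__main__":
        checks = {"scan": True, "validation": True, "regression": True}
        print(developer_may_release(checks, ["low", "moderate"]))  # True
        print(developer_may_release(checks, ["high"]))             # False: needs manual approval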
Threat modelling is a difficult one to automate. Many people think that if you do threat modelling, you will be able to automate better. With SecDevOps you only have a partial view of your threat analysis. Meanwhile, security is a system property, so if you add one small feature it may introduce a severe security flaw. This means that each time you change something, the whole system should be reviewed.
Tests that are effective are difficult to automate. You need a balance: automate what you can, while knowing that it only offers limited coverage. What you cannot automate is not scalable and can be applied on a risk basis.
Fast deployment depends on what you are deploying: a nuclear reactor, a traffic-light system or something simple. Sometimes you do not have the luxury of deploying very fast and insecurely.
Which information security awareness, education and training is required to support the
new way of working?
People are sent to their agile training, where they learn to do what is best for them, to overcome obstacles, not even to consult security before it comes to release time, etc. As a result, things get released prematurely and badly tested. Gradually, people will hopefully learn to include security in their overall planning. Security needs to get integrated into the teams to get a better understanding of each other. But
in the end, security experts cannot be present at 28 different sprint meetings at the same time. Moreover, security architecture does not necessarily focus only on software development; the scope is much broader. Additionally, tools are of little help, since they cannot explain good design methods.
How to control access to the source code in the SecDevOps environment which promotes
sharing?
You need a predefined process: how you get your code through development, release, system
integration, etc. Within this process you need to know the least privileges required. You may differentiate
between the business-as-usual case (you get the least privilege) and the case where you need to do something exceptional
(you need a traceable audit record and a possibility to obtain elevated privileges).
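The distinction between business-as-usual least privilege and exceptional, audited privilege elevation can be sketched as follows; the roles, permissions and log structure are assumptions for illustration only.

```python
# Hypothetical sketch: business-as-usual access uses least privilege, while
# exceptional access requires a justification and leaves a traceable audit record.
import datetime

LEAST_PRIVILEGE = {"developer": {"read_code", "commit_feature_branch"}}
AUDIT_LOG = []  # in practice this would be an append-only, tamper-evident store

def request_access(user: str, role: str, permission: str, justification: str = "") -> bool:
    if permission in LEAST_PRIVILEGE.get(role, set()):
        return True  # business as usual: granted without elevation
    if not justification:
        return False  # exceptional access always needs a justification
    AUDIT_LOG.append({
        "time": datetime.datetime.utcnow().isoformat(),
        "user": user,
        "permission": permission,
        "justification": justification,
    })
    return True  # elevated privilege granted, but traceable

request_access("alice", "developer", "commit_feature_branch")
request_access("alice", "developer", "deploy_hotfix", justification="INC-4711 production outage")
print(AUDIT_LOG)
```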
Exploiting the system requires a lot of time and preparation, so an attacker will try to slip it into your
normal build process. This is why the access rights of your build process should be tailored to the processes and
the flows you have to go through.
If you apply changes through the normal process, they need to run error-free before moving to the
next stage. The automated tracking acts as another party you cannot influence (the second pair of eyes
is a robot).
You need to include security in team KPIs (e.g. the number of security bugs in the code). Do not
incentivize bad habits.
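One such KPI could be computed directly from issue-tracker data, as in the short sketch below; the field names and the metric (open security bugs per team) are illustrative assumptions, not a prescribed measurement.

```python
# Hypothetical sketch: compute "open security bugs per team" as a KPI from
# issue-tracker data; the field names are assumptions for illustration.
from collections import Counter

issues = [
    {"team": "payments", "type": "security", "status": "open"},
    {"team": "payments", "type": "feature", "status": "open"},
    {"team": "frontend", "type": "security", "status": "closed"},
]

kpi = Counter(i["team"] for i in issues if i["type"] == "security" and i["status"] == "open")
print(dict(kpi))  # e.g. {'payments': 1}
```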
You need approval of major changes, and you determine to which level you can delegate. Make
sure the incentives are balanced when you do that: people will always take the decisions that are in
their best interest.
A lot of people have workflow tools built into the process. You control the process through those
tools, which route you from one stage to another. The standard documentation (architectural
design) should be preserved, but it can be generated automatically.
It is one of the fundamental problems of software engineering: software is often seen as
a functional design, but not as a system design. System design thinking has to be communicated.
If the monitoring is real time and the security operations centre sees a breach, their first concern is
containing it. They are not looking at the root cause; they are more concerned with the immediate
behaviour. Root cause analysis does not happen at monitoring time. All the intelligence is post mortem,
unless the incident is caused by code that is misbehaving and has not been properly tested.
Should the installation of software be restricted or not, in order to boost the performance?
Decisions on tooling should be made by senior developers, and the junior and mid-level developers should
just follow. Having too many tools is in itself a risk if developers leave the company.
Actually, the senior developer may propose tools, but there should be a review board to approve
their use in production (e.g. with respect to patching and known vulnerabilities). There should be a justification
from the proposer: replacement, better functionality, etc.
What is the best way to segregate the network while preserving the flexibility?
Segregations are there for good reasons, and the world does not revolve around SecDevOps. You need
to get the balance right: not too many zones, but you also do not want a coconut (a hard shell with a soft inside).
Is there a change in the role of Security within the Development Lifecycle due to SecDevOps?
You need a hierarchical architecture: the chief architect and the more detailed architects. The
security architecture should mirror that same structure. You need to elaborate around a single model/plan
so that the security architecture is not decoupled: a single model with different viewpoints. By the
time it comes to software architecture, you have a plan, with the security architecture fitting into it.
The baseline configuration should be totally agnostic to applications and totally locked down.
However, an application may require a certain feature to be enabled. The configuration should make it
possible to enable features and to deviate from the documented baseline. In order to do that, a justification
from the user and approval of the technology are required.
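A minimal sketch of such a deviation process is given below: features are disabled in the locked-down baseline and may only be enabled for an application that holds a documented deviation with a justification and an approval. The baseline entries, application names and register structure are illustrative assumptions.

```python
# Hypothetical sketch: features are disabled in the locked-down baseline and may
# only be enabled through a documented, approved deviation.
BASELINE = {"remote_desktop": False, "powershell_remoting": False}

# Deviation register: feature -> (requesting application, justification, approver).
DEVIATIONS = {
    "powershell_remoting": ("batch-scheduler", "needed for nightly job orchestration", "platform board"),
}

def effective_setting(feature: str, application: str) -> bool:
    """Return the setting for an application: baseline unless a deviation is approved."""
    baseline_value = BASELINE.get(feature, False)
    deviation = DEVIATIONS.get(feature)
    if deviation and deviation[0] == application:
        return True  # enabled only for the application with an approved deviation
    return baseline_value

print(effective_setting("powershell_remoting", "batch-scheduler"))  # True
print(effective_setting("powershell_remoting", "webshop"))          # False
```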