Final Transcript

The document introduces the Security Practices for CNS Level 3 training by Nokia, emphasizing the importance of cybersecurity in light of recent breaches at major companies. It outlines the course structure, which includes a curriculum and certification aimed at upskilling employees in security practices, and highlights the necessity of strong password management through tools like KeePass. Additionally, it briefly discusses the Netguard Certificate Lifecycle Manager (NCLM) for managing digital certificates to enhance security and operational efficiency.

== M00.Intro ==

Welcome to the Security Practices for CNS Level 3 training, brought to you by Nokia CNS KMCD.

We are building technology leaders. I am Ben Aveling, cybersecurity architect and global champion for
the security for CNS Community of practice.

This module is an introduction to the Security Practices for CNS curriculum.

Why it was created, what it covers, what assessment there will be and how to go beyond the material
presented.

We'll begin by looking at why security matters.

20 minutes before I made this slide, I opened my social media to see that a $10 billion bank had just
been crypto locked.

If you were to open your browser right now and search the news or social media or the web, you
wouldn't have to look too far to find another large company or organization or country that has been
hacked.

Two weeks ago, AT&T announced that it had been breached.

Three weeks ago, Dish Network announced that it had been breached.

Three months ago, cybersecurity company LastPass revealed that it had been breached and that 25.6
million password vaults were stolen.

Depending on the strength of each vault's master password, cracking, that is opening, the password
vault might take minutes or might take years.
The attackers gained access to LastPass by hacking the home network of a LastPass engineer.

For these reasons, our customers are worried.

They want us to promise that our products are secure.

And they want us to commit to financial penalties if our products are not secure.

Governments are worried.

If we can't persuade them that our products are secure, our customers won't be allowed to buy our
products.

Huawei has been banned from the UK, in part for reasons beyond its control.

But also, because it wouldn't address concerns about a lack of quality and a lack of transparency in its
engineering practices.

Huawei is not the only vendor to be warned that it needs to improve its processes and the quality of its products if it is not to risk being deemed a high-risk vendor and potentially being banned from the UK.

The UK is just one market, although a big and important one.

But what's more worrying for telecom vendors is that other countries are looking at the UK and passing
similar legislation.

We'll now look at what this course covers.

Security depends on people, processes and technology.


Nokia is committed to upskilling its people, reworking its processes, and upgrading the technology it uses
to create and deliver products to customers.

This course is part of the push to upskill employees like you.

This course includes both a curriculum, that is, a set of training material, and a certification, that is, a way of showing that you understand the material covered.

Security Practices for CNS is a survey course. It introduces important cybersecurity concepts without diving deeply into each one.

If you compare it to an external security certificate, you will find differences: this course covers some things that other certificates don't, and other certificates cover some things that this course doesn't.

The choice of topics has been weighted to match the services we most commonly provide our customers
and the issues we most commonly encounter when creating and delivering our products.

If you are new to cybersecurity, this certificate will give you a broad understanding of what the key concepts are.

If you are experienced in some aspects of cybersecurity, this should still help expand your understanding of concepts you may not have had to deal with so far.

In either case, completing this course and obtaining this certificate is a way of proving your understanding of this area and your competencies.

However much experience you already have, passing this certificate demonstrates that you have the knowledge necessary to understand the cybersecurity issues that impact your current role, and perhaps your next role.

To be awarded the certificate, you must complete the mandatory parts of the 9 modules in this
certificate, which is about 14 hours of content in all.

You must pass the proctored exam, and you must satisfactorily complete a work assignment.

The questions on the proctored exam are intended to be of a similar difficulty to the questions in each of
the end of module quizzes.

The work assignment is intended to allow you to show your understanding of some of the concepts from the curriculum and to confirm that you can apply those concepts to your current role.

It is expected that students of this curriculum are familiar with at least the recommended material covered in the Level 1 Security Practices for CNS training, and it's recommended that students are familiar with the material covered in the Level 2 Security Practices for CNS training.

Neither will be examined in the proctored exam, unless they are also covered in the Level 3 material.

This is a survey course. It introduces a wide range of security topics without going deeply into each one.

You are strongly encouraged to consider the optional modules.


You will also find that many units are just a single chapter which is contained in a longer course, and you
may find some of those courses either useful or interesting.

There is also extensive additional content available in Nokia Learning Hub and in LinkedIn learning.

Once more, please visit or join the Community of Practice at nok.it/S4CC. You can connect with other people, share your experiences, share your feedback on the course, ask questions, and generally stay up to date with our rapidly changing environment.

If you are an expert in some aspect of cybersecurity and you've totally aced the exam, why not consider becoming a CNS Professor?

It's a chance to share your hard-earned knowledge, help your peers, and gain some recognition.

To find out more, please visit nok.it/CNSProfessors.

Thank you for completing this introductory module.

You are helping make Nokia and its customers safer.

KMCD is Knowledge Management and Competency Development. As experienced technology practitioners, we inspire a continuous learning culture that helps employees across Cloud and Network Services collaborate, share knowledge, and advance their careers. Our mission is to build global technology leaders who will keep CNS at the forefront of our rapidly evolving industry.


== KeePass ==

Welcome to password management with KeePass.

I am Ben Aveling, cyber security architect and global champion for security for the CNS community of
practice.

In this video, we look at using KeePass for secure password management.

In this video, we will look at:


- What is KeePass?

- How to get started with KeePass?

- How to create and add passwords into KeePass?

- How to choose strong passwords?

- And how to use the passwords that are stored in KeePass?

It's important that our passwords are strong.

It's important that we store them securely.

Choosing strong passwords is hard.

Remembering a strong password is also hard.

Therefore, Nokia is deploying KeePass to all desktops.

KeePass is a password vault.

That means that instead of having to remember lots of passwords, you only need to remember 1 master
key.

And KeePass will keep your passwords secure on disk, and will keep them secure when you use them, even if you are sharing your screen.

You can and should put your existing passwords into KeePass.
You can and should use KeePass to generate strong passwords.

You probably already have KeePass installed.

Nokia started deploying KeePass in January of 2023.

If you do need to install KeePass, you can install KeePass from the software center.

Start the Software Center. Search for KeePass.

Click install. Follow the prompts.

To use KeePass, start KeePass from the start bar, the same as you would with any other application.

Choose File, New. The first time you use it, it will prompt you to create a new database; choose the location.

It's OK to save your KeePass database in OneDrive.

Make sure you choose a strong master password. Make sure that it's one that you can remember.

It's OK to write this down until you memorize it, as long as you store it somewhere safe.

There are options you can set, but for most people the defaults will be fine.

You'll be prompted to print an emergency sheet. It's OK to print it. It's OK not to print it.
If you print it, make sure you store it somewhere safe.

If you forget the master password, there's no good way to recover it.

To add passwords:

Click Add Entry or press Ctrl+I.

Enter a title, username and URL as appropriate.

KeePass will offer you a strong password or you can enter your own.

If KeePass says that a password is weak, you should change it.

If an existing password has been stored somewhere insecure, delete that file and consider changing the
password.

Set any other options that you need, but for most people the defaults will usually be OK.

The best way to get a strong password is to let KeePass choose it.

Any random string of characters is stronger than a random string of words the same length.

And a random string of words is stronger than words that are chosen following any pattern.

If you need a strong password that you can type or that you can remember, a good way is to take a dictionary.


Open it at random. Choose a random word.

And repeat that at least four times.

Then open it at a random page and take the page number and a random letter.

Join them all together, for example: glamour.frenziedly.chikondi.octagon.272J.

Change a few letters to uppercase, or change a letter to a number that resembles that letter. All of these things will make your password stronger.
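
As an illustration only (not part of the course material), here is a minimal Python sketch of the dictionary method described above: it picks four random words from a system word list, appends a page number and a letter, and prints a rough strength estimate. The word list path and the output format are assumptions.

    # Sketch: dictionary-based passphrase, roughly as described in this module.
    import math
    import secrets

    WORDLIST = "/usr/share/dict/words"  # assumed location; varies by system

    with open(WORDLIST) as f:
        words = [w.strip() for w in f if w.strip().isalpha()]

    chosen = [secrets.choice(words) for _ in range(4)]      # four random words
    suffix = str(secrets.randbelow(1000)) + secrets.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    passphrase = ".".join(chosen) + "." + suffix            # page number + letter

    # Rough strength estimate: log2(dictionary size) bits per word, plus the suffix.
    bits = 4 * math.log2(len(words)) + math.log2(1000 * 26)
    print(passphrase)
    print(f"~{bits:.0f} bits of entropy")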

Once you have your passwords in KeePass, you can get them out by using Ctrl+B to copy the username and Ctrl+C to copy the password.

There is an option to open a URL from inside KeePass.

KeePass does not auto populate password fields the way some other less secure Password
stores/password vaults do.

You can use Ctrl+V to automatically type the username and password into the last window that you had open, but be careful with this: it doesn't always work cleanly.
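
If you ever need stored credentials from a script rather than from the clipboard, the third-party pykeepass library can open a KeePass database and read an entry. This is not mentioned in the video, so treat the lines below as an illustrative sketch; the database name and entry title are hypothetical.

    # Sketch: reading one entry from a KeePass database in Python via pykeepass.
    from getpass import getpass
    from pykeepass import PyKeePass

    kp = PyKeePass("Database.kdbx", password=getpass("Master password: "))
    entry = kp.find_entries(title="Lab router", first=True)   # hypothetical entry
    if entry:
        print(entry.username)
        # use entry.password in your automation; avoid printing or logging it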

Thank you for listening to this video.

You're making the world and Nokia safer.

KMCD are experienced technology practitioners. We inspire a continuous learning culture that helps employees across Cloud and Network Services collaborate, share knowledge, and advance their careers. Our mission is to build global technology leaders who will keep CNS and Nokia at the forefront of this rapidly evolving industry.

== nclm in 4 minutes ==
Welcome to this introduction to Netguard Certificate Lifecycle Manager, NCLM.

NCLM is Nokia solution for centralized management of digital certificates.

Certificates represent a fundamental building block of the security infrastructure of enterprises, communication service providers and device manufacturers.

They are used for authentication of users and machines and to establish encrypted communication.

NCLM helps security and network operation teams to manage and protect the thousands of keys and
certificates throughout their organization.

Certificate lifecycle management with NCLM eliminates service outages caused by expired certificates, and it reduces operational costs through enhanced automation.

NCLM acts as a centralized platform between certificate authorities, which issue certificates, and target systems, which need those certificates for encrypted communication.

NCLM supports company-internal CAs as well as public CAs, and a wide variety of network elements, operating systems and applications.

The upper chart shows certificates which are about to expire.

Red indicates that manual interaction is needed, while blue means that those certificates will be renewed by NCLM automatically.

The lower chart shows the certificates which were revoked during the last three months.

NCLM manages certificates on behalf of target systems, so let's have a look at the target system inventory.


Certificates and systems are mapped to containers. A red color indicates that there is something to do. Let's look into the Berlin container.

Obviously it has two target systems.

I select the first one.

Indeed, this target system runs a web server which has an expired certificate.

Let's connect to this web server.

Right away, my browser shows a security warning about this expired certificate.

Back to NCLM.

We need to enroll a new certificate for this web server. I quickly check the enrollment configuration and the deployment configuration, and then I simply press the Enroll button.

Certificate enrollment is started.

Once it is finished, the new certificate is automatically deployed on the web server and the service is
restarted.

You see, the newly enrolled and deployed certificate is added as the last deployed certificate to this list.

One final check: we reload the web server page, and here you are, problem solved.

If you want to learn more about NCLM, for instance about certificate expiration monitoring and automatic renewal, certificate enrollment via the NCLM REST API, or further use cases, then please request a one-on-one session.
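
To make the expiration-monitoring idea concrete, here is a small standalone Python sketch, independent of NCLM and using a placeholder hostname, that checks how many days remain on a server's TLS certificate; NCLM performs this kind of monitoring, and the subsequent renewal, automatically.

    # Sketch: report remaining validity of a server certificate.
    import ssl
    import datetime
    from cryptography import x509

    HOST, PORT = "webserver.example.com", 443   # placeholder target system

    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())
    remaining = cert.not_valid_after - datetime.datetime.utcnow()
    if remaining.days < 30:
        print(f"{HOST}: certificate expires in {remaining.days} days - renew it")
    else:
        print(f"{HOST}: certificate is valid for another {remaining.days} days")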

== PS120-hardening ==

Welcome to this elearning on ensuring security hardening of products and systems. This micro module
should take about 12 minutes to complete.

In this micro module we will give you an introduction into the security hardening of products and
systems. In chapter one, we will learn why hardening is a necessity for Nokia product security.

We will then define what is meant by hardening covered in chapter 2.

In Chapter 3, the security responsibilities in different deployment environments are outlined.

Finally, in chapter four, we will give you an overview of why protecting firmware, hardware and software
is important.

Ready to get started? Let's begin by exploring why security hardening is necessary for our products.

Generally speaking, hardening is a process through which the attack or vulnerability surface of an
application, product, or system is reduced. This can include usage of additional equipment, tools,
services, and techniques. Hardening follows the complete product life cycle. It starts with proper design
and implementation, continues with configuration and integration, operation and maintenance, and
might need to include the end of life decommissioning processes as well.

In DFSEC, when we talk about security hardening, we refer to activities that aim to harden or tighten the
system by ensuring proper configuration of components of the product.

Hardening is achieved by implementing security controls that reduce the attack surface of the products,
which in turn reduces possible threats.

Some examples of attack surface reduction include removing unnecessary services, implementing
thorough input validation, and controlling allowed network traffic.

With these activities, our goal is to protect products and network integration against security threats.

Generally speaking, hardening is a process through which the attack or vulnerability surface of an
application, product, or system is reduced.

We want our customers to have full confidence in our products. Today's customers are very security
conscious and aware of possible security impacts if hardening is not implemented in the products they
purchase.

General vulnerability scanners scan the products and networks for known vulnerabilities and
configuration issues.

Lack of proper hardening will be immediately noticed by our customers by running any of these basic
scanners.

We can, of course, influence the security of anything that we have developed in-house. However, when it comes to external components that are not developed by Nokia, we should do our utmost to minimize vulnerabilities and ensure that everything is properly configured and patched.
This is where hardening comes in to keep our product secure. Hardening activities are conducted on all
system components. Everything from operating systems and services through middlewares and
databases to applications.

Now that you have a grasp of why hardening is imperative for Nokia product security, let's get a more in
depth definition of hardening.

What exactly do we mean when we talk about hardening? Even with carefully planned architecture and thoroughly verified secure code, the resulting product will likely be insecure if it is combined with insecure or unhardened components.

Therefore, it is crucial to integrate and configure all external third party components so that they cannot
be misused, manipulated, exploited, crashed, hijacked, injected with malicious code and so on by an
unauthorized attacker.

In DFSEC hardening refers to the proper configuration of third party components.

This means that operating systems and any other third party components only have the services enabled
that are needed.

Where possible, as the first line of defense, you should implement controls that limit inbound and outbound network traffic so that only expected traffic is allowed.

Operating systems and other third party components are also in a secure configuration.

Creating a secure product requires hardening the operating system as well as other embedded and used
components such as disabling unused ports or services.

When developing a product, hardening must be done as much as possible during research and
development. This way we ensure that the products we deliver are hardened by default.

We must ensure that the product requires minimum hardening activities in the installation phase as
much as possible.
However, some hardening is still required at the installation and commissioning phase.

We must ensure that these activities are as automated as possible and cannot be easily circumvented by
a careless person at the installation or integration phase.

We must also ensure that all required steps are documented in customer documentation as clearly as
possible.

Let's see what you've learned so far. Take a moment to read through the question and select the correct
answer.

Let's familiarize ourselves with additional responsibilities of the product that, though not hardening
related, still play an important role in product security. These considerations contribute to the overall
security of a deployment as they evaluate the deployment model and direct the operators to harden the
environment the product is used in.

Therefore, it is important that Nokia products clearly describe the environment related expectations and
responsibilities of an operator related to the environment the product is integrated into.

Because the deployment environment of our Nokia product has an impact on the hardening we implement, let's briefly consider the depicted typical or example deployment environments. We have three types of deployment environments: the traditional bare metal, virtualized in a private cloud, and virtualized in a public cloud. Security requirements and implementation methods depend on the type of deployment environment, so they are good to keep in mind.

In a bare metal scenario, Nokia is responsible for everything in terms of security.

In contrast, Nokia may only be delivering the application in cloud deployment scenarios, for example.

As a rule of thumb, however, we are responsible for everything that is delivered as part of the Nokia
product, not just our own code. We must also document all the security requirements against the
virtualization environment, even when this is a non Nokia environment, we must be able to define the
precise security requirements required by our products to ensure secure deployment.
Finally, let's explore the ways we protect hardware, firmware, and software in our products.

At Nokia, we must take protecting hardware, firmware and software very seriously. After all, we deliver components for critical national and private infrastructures, which significantly raises the stakes for security in everything we develop and deliver.

Hardware, firmware, and software must all be protected from tampering in order to ensure that the
products we develop are still secure when they arrive at the customer and that they stay secure
throughout their life cycle.

A failure in security could result in everything from financial damage, bad press and damage to critical infrastructure, to breaches of confidentiality and identity theft, to name just a few.

Let's unpack what a security attack can look like, referring to the CIA triad, a model used for the development of security policies. Security consists of confidentiality, integrity, and availability.

A security attack is an attack on any of these and we intend to protect these objectives by secure design,
implementation and hardening.

Click through the boxes to discover more about different types of attacks.

Gaining unauthorized access to data is an example of an attack on confidentiality. Attackers may also use data to move laterally from one compromised system to another.

When attackers try to tamper with devices, this is considered an attack on integrity. For example, attackers may attempt to spread malicious hardware, firmware, or software by infecting them before they're deployed to customers.

An attack on availability can come in the form of a denial of service attack. In such an attack, resources or
services become unavailable to legitimate users, and attackers deny or degrade the normal functionality
of a system.
Now let's pause and think for a moment. What can an attack on security look like in practice? Drag and
drop the terms to match with their corresponding attacks.

You have now completed all chapters of this module. Before you go, let's recap the key points that we
have learned.

In this module, we have learned that third party components are hardened to ensure they are secure.
Hardening activities must be done, especially during research and development, but also during the
installation and commissioning phase.

Additionally, we were reminded that Nokia is responsible for everything that is delivered as part of a
product.

It's also important to note that all Nokia developed features are also equally important and are covered
by other design for security activities.

Lastly, we now have an idea of the importance of protecting hardware, firmware and software, as a failure to do so could be detrimental.

Thank you for your time. You may now close this module.

== PS121-secure-deployment ==

Welcome to this e learning on secure deployment and architecture in different deployment scenarios.
This micro module should take about 12 minutes to complete.

This micro module consists of two chapters. We will learn about the special characteristics of security
requirements in chapter one.

In chapter two, we will consider the role of proper customer configuration in ensuring the security of
Nokia products.

Let's begin by getting to know security requirements special characteristics.


In this chapter, we will explore the special characteristics of security requirements for bare metal devices, virtualization, and containers, as well as for a cloud environment.

Depending on where the product is deployed, there are additional security measures that need to be
implemented.

The principle of minimized surfaces and interfaces as well as proper configuration are however the same
for any kind of product.

Begin by clicking bare metal device security.

Let's begin by considering the security and hardening requirements for bare metal devices.

Any embedded or Internet of Things type device that could be physically accessible to attackers should be hardened from a physical standpoint. To give an example, internal device components or interfaces can be used to physically access the device and bypass security controls unless the device has been properly hardened and secured.

To learn more about controls typically used in bare metal devices from a security controls point of view,
click the boxes.

To continue learning about the special characteristics, click virtualization and container security.

Next, we will learn more about virtualization and container security.

Virtualization relies on software to simulate hardware functionality and create a virtual computer system
or runtime environment.

This enables IT organizations to run more than one virtual system, multiple operating systems and
applications on a single server. It also saves costs through server consolidation as well as increased
efficiency.
On the flip side, it introduces a new attack surface for the virtual services as a single host system can
hold multiple virtual instances.

Let's break down what to take into consideration with hardening in virtualized environments. The virtualizing layers, for example the host operating system, hypervisor, or container engine, need to be hardened and configured properly as well.

We need to keep in mind that other virtual machines, containers, and applications can run on the same
physical machine.

To give an example of this, in a container-based virtualization system, the kernel is shared with other
containers and the host and the separation is loose. In case the kernel gets compromised, the impact
would be bigger, affecting many more virtual instances or applications. In comparison, in a hypervisor
based virtualization, there are separate kernels for each virtual machine. In this case the impact of the
exploit in the guest operating system kernel would be smaller.

In some special cases, sharing hardware is also not acceptable. For example, in the case of some critical
applications.

In such a case, an attack like the Rowhammer attack which exploits a bug in dynamic random access
memory modules could cause a critical data breach.

In such an attack, an attacker exploits leaking or changing the contents of nearby memory.

To avoid a data breach or denial of service, critical applications cannot share hardware with other applications.

Another case would be a denial of service, if an application other than the critical application has gained access to a hardware security module.

Finally, let's consider virtualized network functions and the network security issues within them. The
virtual machines are often connected to an overlay network and have no information about the
underlying physical network infrastructure.
The inter-virtual-machine traffic can pass through external physical switches or routers without the connected virtual machines noticing.

For this reason, inter-virtual-machine communication should be encrypted.

Thus, network separation needs to be configured on the virtualization layer.

Lastly, let's take a look at cloud security.

So what types of considerations should be made when utilizing a cloud environment for deployment?

Firstly, in the cloud hardware, network and management are often shared with other applications and
tenants introducing more risks.

In case of a public cloud, your data could be exposed to the public Internet. In addition, access controls
must be configured properly.

We will now consider the shared responsibility model of security in the cloud. It's important to
understand that there is a boundary between the product side and the customer side of the cloud.

Especially if the infrastructure is provided by a service and not as part of the product, this boundary
needs to be diligently defined.

Defining the boundary is the responsibility of the product team and it must be stated in the customer
documentation.

The responsibility of the product is only the security within the cloud. If the underlying infrastructure is
provided by the customer or a third party company.

To conclude our look at cloud security, let's consider how the cloud affects security hardening and what security tools the cloud provides that can be utilized.

Basic settings and security tools are features of the cloud infrastructure that can be used to harden the product. They need to be configured separately on the cloud management interfaces, not on the product itself.

There are also advanced features available, such as an authentication service, central logging, and SIEM.

Next, let's acquaint ourselves with the importance of customer configuration for security.

In the previous chapter, we considered how we at Nokia can directly impact the security of our products through security hardening. The security of our products is, however, also dependent on how customers configure them. So how can we better ensure their security once they're at our customers' end?

Mistakes can be made during the customer installation when Nokia's instructions and manuals are not
followed in detail.

To avoid these errors, we should take a few factors into account when designing the configurability of
the product.

Firstly, we should ensure that the default configuration is secure. Secondly, if any relaxation to the
configuration is required, it must be done during commissioning.

In other words, the configuration should never be insecure by default.

Let's take a look at an example. We must use SSH version 2 protocol by default and not allow usage of
SSH version one at all as SSH version one protocol is not considered secure.

We must also ensure that only strong encryption algorithms are enabled in the SSH version two
interface.
The strong ciphers in SSH version 2 provide proper encryption as well as proper key lengths, which ensures that they are not breakable.

If there is an exception in which weak ciphers need to be enabled or TELNET would be needed, these
must be done during commissioning.

So how can we better ensure the security of our product after commissioning? It is important to build in
the means to automatically verify that hardening is done properly.

This way we minimize the manual configuration effort and thus minimize human error made during
installation and commissioning.

A good verification script identifies areas that have not been commissioned correctly.

This ensures to both our customers and to Nokia that the products are properly installed and configured.
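
As a hedged illustration of what such a verification script might check (this is a sketch, not a Nokia baseline, and the weak-cipher list is only an example), the fragment below flags SSH version 1 and weak ciphers in sshd_config.

    # Sketch: two hardening checks against /etc/ssh/sshd_config.
    WEAK_CIPHERS = {"3des-cbc", "aes128-cbc", "arcfour", "arcfour128", "arcfour256"}

    findings = []
    with open("/etc/ssh/sshd_config") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key.lower() == "protocol" and "1" in value:
                findings.append("SSH protocol version 1 is enabled")
            if key.lower() == "ciphers":
                weak = WEAK_CIPHERS & {c.strip() for c in value.split(",")}
                if weak:
                    findings.append("Weak ciphers enabled: " + ", ".join(sorted(weak)))

    print("\n".join(findings) if findings else "SSH hardening checks passed")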

Many of the security findings from customer audits are actually related to the improper installation and
configuration of our products.

Now it's your turn. Take a look at these statements and indicate whether they are true or false. Confirm
your answer by selecting submit.

You have now completed all chapters of this micro module on hardening systems. Before you go, let's
quickly revise the most important points we've learned.

We've seen that the hardening of systems needs to take into account the special characteristics of
security requirements for bare metal device security, virtualization and container security, as well as
cloud security.

For bare metal device security, we learned that any embedded or Internet of Things type device that could be physically accessed should be hardened from a physical standpoint.
Regarding virtualization or container security, we now know that virtualizing layers need to be hardened
and configured properly, and other virtual machines, containers and applications can run on the same
physical machine.

We also learned that the shared responsibility model for security is important for conceptualizing the
boundary of responsibility between the product and customer side of the cloud.

Finally, to avoid mistakes during the customer installation of a product, we need to ensure that the
default configuration is secure.

Thank you for your time. You may now close this module.

== ps122.txt ==

Welcome to this e-learning on integrity protection of software and hardware. This micro module should take about 10 minutes to complete.

This micro module consists of four chapters intended to outline what integrity protection is all about, as
well as the types of threats it tackles. In chapter one, we will learn why protecting the integrity of our
products is essential for Nokia.

We will then explore how software integrity may be compromised in Chapter 2.

In Chapter 3, the ways of protecting the integrity of software are outlined.

Finally, in chapter four, we will learn how to protect the integrity of our hardware.

To start, let's get to know why protecting the integrity of our products is necessary.

At Nokia, we work diligently to protect the logic and the data of our products. Software Integrity
protection is a necessary step to tamper proof our products and to ensure a high level of security.
In essence, integrity protection ensures that the code Nokia provides is running in our products
wherever they are used, and that the code is not being manipulated in any way.

It is important that we protect the integrity of the software in our physical devices that are shipped all
over the world. In addition, we must ensure that the integrity of our software running in a public cloud
or operators private clouds is also intact.

The cloud environment is dynamic and fast-paced by nature, making it even more important for us to be
able to automatically validate in real time that the integrity of the software matches the expectation.

We can achieve product integrity by preventing incidents like injection of malicious code or the
deliberate insertion of backdoor surveillance tools or wiretapping devices into our products. These are
all legitimate threats that every manufacturer and supplier of network devices faces, including Nokia.

In fact, there are some well known examples of intelligence agencies tampering with network devices by
intercepting them and installing malware to be able to exploit the device later.

For this reason, integrity protection is so vital. Without it, there is no way of knowing if someone has
tampered with the device.

Now it's time to take a closer look at some of the ways software integrity may be compromised.

The integrity of a software can be compromised anywhere in the supply chain. Typically attacks occur
when software is modified, configured, or managed during the software life cycle.

Software can be manipulated, for example, by a rogue engineer in the software development environment, by an employee in the delivery pipeline, or even by an administrator in the operator network. For this reason, we must ensure that our products are protected on all fronts.

Bad actors have a variety of strategies and techniques at their disposal to compromise the integrity of
software. They can, for example, install rootkits and backdoors. They might also be looking to disable
security monitoring or to inject malicious functionalities in the software.
Without integrity protection, these changes can go unnoticed for a long period of time.

You can learn more by clicking the icons.

In case the integrity of a software is not properly protected, even a formally secure product can be
compromised.

This may happen, for example, when we deliver updates. If the software doesn't sufficiently verify the
integrity of the update or doesn't properly authenticate the system providing the update.

The product may also be compromised if the software fails to ensure that the data has not been
intercepted in transit.

In all, the previously described instances, there is a risk of malicious code being installed in a Nokia
product that in turn could result in everything from back doors being installed to end user information
ending up in the wrong hands to financial and reputational sanctions for Nokia.

In the worst case scenario, tampering with Nokia products could constitute a threat to national security.

That is why protecting the integrity of our products is paramount.

Let's see what you've learned so far. Take a moment to read through the question and select the correct
answer.

Now that we are familiar with the different ways the integrity of our software may be compromised, it's
time to find out what we can do to protect it.

While Nokia cannot make tampering impossible, we can make it as difficult and time consuming as
possible.

First and foremost, we need to digitally sign and encrypt our software and firmware.
When the code is signed by the software author, we can confirm that it is authentic and has not been altered after it was signed, even once it is running.

We can also securely manage our products using services like software integrity protection or SWIP.

The Nokia SWIP is a service that Nokia provides to protect the integrity of our software.

Currently it's being used to protect the integrity of our base stations, as shown in this example use case.

The service works like so:

The central Nokia software signing server signs binaries sent by authorized software build servers.

This service is provided by the Software Integrity Protection Service or SWIP.

Our products have built in software that then verifies the digital signature of every binary they load. In
other words, the product self tests that the software it loads during boot up is authentic and intact.
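
The following Python sketch shows the general idea of signing and verifying a binary. It is conceptual only and does not represent SWIP's actual implementation; the file name is a placeholder, and in practice the signing key would be protected, for example in an HSM.

    # Sketch: sign a software image and verify it before loading, using Ed25519.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    image = open("firmware.bin", "rb").read()       # placeholder artifact

    signing_key = Ed25519PrivateKey.generate()      # build-server side
    signature = signing_key.sign(image)

    public_key = signing_key.public_key()           # embedded in the product
    try:
        public_key.verify(signature, image)
        print("Signature valid: image is authentic and intact")
    except InvalidSignature:
        print("Signature check failed: refuse to load the image")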

To find out more, click the buttons on screen.

We must also ensure we are always retrieving and delivering updates through secure means. What's more, in order to mitigate the risk that any malicious code has found its way into Nokia products, our software and firmware should be scanned for malware before delivery.

To ensure our software stays safe, we must also utilize hardware based security solutions, especially
when it comes to cryptographic operations. An example of this is the use of trusted platform module, or
TPM for securely storing keys used for authentication.

In essence, trusted modules are specialized chips in charge of storing secrets and protecting the product
from attackers.
They are typically used to store encryption keys for facilitating storage media encryption.

When we integrate security solutions to the hardware, we can provide a higher level of security than
with software alone and better protect our products from hackers, malware and thieves.

Now let's pause and think for a moment. There are many security measures that Nokia is taking to
ensure the integrity of its software. Can you match the different measures with their functions? Drag and
drop the items to the appropriate location.

In addition to protecting our software, we must be able to ensure the integrity of our physical devices as
well. The final chapter of this micro module discusses how we can do just that.

Every piece of customer premises equipment (CPE), as well as all other devices located in publicly accessible areas where a possible attacker could gain access to them, must be properly protected from a hardware security standpoint.

Simply put, this means physically securing the equipment to prevent attackers from altering its logic or
inserting or replacing any of its components.

Physical access to all interfaces that provide access to sensitive device functionality should also be restricted. This applies in particular to interfaces such as JTAG.

All CPE should also automatically detect any unauthorized insertion or deletion of hardware components
and generate an alarm or log entry if such an event occurs. What's more, they should utilize built-in self
test or BIST to ensure the integrity of embedded memory contents.

Using encryption, the equipment should also restrict the access to physically removed storage media and
prevent the startup of the device utilizing a replaced boot media.

Depending on the criticality of the stored data, the equipment can erase the verification credentials and other possible secrets upon tamper detection, thus ensuring the confidentiality of data.
You have now completed all chapters in this micro module. Before you go, let's quickly go over the key
points we've learned.

In this module, we have seen that integrity protection is a necessary step to tamper proof our products
and ensure the code Nokia has provided has not been manipulated in any way.

We also learned that Nokia can protect the integrity of its software with digital signatures, software
integrity protection services, malware scanning, as well as hardware based security solutions.

And finally, we saw that in order to keep our hardware safe, we must restrict physical access to customer
premise equipment and utilize tamper detection.

Thank you for your time. You may now close this module.

== sample hardening plan ==

https://nokialearn.csod.com/ui/lms-learning-details/app/course/b2f2d84b-f5a7-476e-a339-37b3f40dab92

We will now talk about the RCH-0383 security hardening verification feature in NCIR17.

The feature shall evaluate the current hardening state of NCIR and document it. A gap analysis is made based on the evaluation results. Each gap is analyzed to determine whether it can be fixed by our organization or whether it needs to be fixed by Wind River.
The security hardening tasks in this release are:

Analyzing the security hardening checklist requirements and selecting a limited set of requirements to be tested, based on gaps that were detected in NCIR16A; creating the selected test case definitions in Quality Center; creating automatic test cases for each selected test case; executing the automated test cases.

Executing manual test cases in case automation is not possible or feasible (this step is optional). Evaluating the results of the security testing tools. Studying the OpenSCAP testing tool, which is optional.
The selected test cases cover networking (ICMP, IP protocol version 4) and services.

We need to identify required services; verify open ports, privileges and protocols; enforce TLS; disable remote root login; check encryption algorithms and key lengths; and remove the built-in keys and certificates.

For the Linux OS: there should be no development tools, no world-writable files, and no unauthorized SUID/SGID bits on executables; checks also cover world-writable directories, the default umask, mount options, orphan files and user management.

For user management: unused user accounts or groups, clear-text credentials, roles and account lockout. For logging: identifying the data to be logged, and the protection of log files.
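
As a simplified illustration of two of these Linux checks (a sketch only; a real audit would scan the whole filesystem and apply the agreed baseline), the snippet below reports world-writable files and SUID/SGID executables under a chosen path.

    # Sketch: find world-writable files and SUID/SGID binaries under ROOT.
    import os
    import stat

    ROOT = "/usr/local"   # example scope

    for dirpath, _, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue
            if not stat.S_ISREG(mode):
                continue
            if mode & stat.S_IWOTH:
                print("world-writable:", path)
            if mode & (stat.S_ISUID | stat.S_ISGID):
                print("SUID/SGID:", path)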

For virtualization and the hypervisor: quotas, networking, access control, drivers, and MAC addresses.

The QC status is updated by the feature team based on the results of the test case. The following actions are taken by the security leads based on the QC status: the security hardening checklist is updated, the security hardening specification is updated, and a gap analysis is made for each failed test case.

A ticket to Wind River is logged for every gap that cannot be corrected by Nokia.

The Security Content Automation Protocol (SCAP) is a method for using specific standards to enable the automated vulnerability management, measurement and policy compliance evaluation of systems deployed in an organization. The SCAP checklists standardize and enable automation of the linkage between computer security configurations and the NIST Special Publication 800-53 controls framework.

OpenSCAP provides a wide variety of hardening guides and configuration baselines developed by the open source community, and ensures that you can choose a security policy which best suits the needs of your organization, regardless of its size.

SCAP Workbench is a graphical utility that offers an easy way to perform common OpenSCAP tasks.

This tool allows users to:

Perform configuration and vulnerability scans on a single local or remote system.

Perform remediation of the system in accordance with a given XCCDF or SDS (source data stream) file. The workbench can generate reports in multiple formats containing the results of a system scan. It allows you to modify an XCCDF profile in an easy way without changing the respective XCCDF file. The tool provides a graphical way to enable or disable XCCDF elements.

Your changes can be stored as an XCCDF tailoring file.

== secure-email ==

Alright. Thank you. So good morning and good day to everybody. So this is about the information
security classification. So this is a policy that has existed in Nokia for a good few years.

The important thing is that any official Nokia documents or any any confidential information be correctly
classified, so the the levels are Nokia internal use which can be shared with externals with customers or
vendors providing there's an NDA with the individual or there might be an NDA with the company.

Like confidential information is is more limited distribution, so it might just be just within a group just
within a department, whereas secret information is for named recipients only. The real Crown jewels
information of the company.

So in order to help you implement this policy, IT has deployed the AIP toolbar. It's been out now for about five years, but we've made a few changes recently, so we thought it was worthwhile to present this topic again.
Just to remind you, the toolbar is what you see in Office applications,

such as Outlook, Word, Excel and PowerPoint, and it offers you the chance to classify the document with one of the three standard Nokia information classifications. It also offers the capability to classify a document as Public or Personal; that's basically saying none of the three classifications apply, it's something different.

Now, Public means something that really is public. If you put that classification on a document, you are telling the recipients that they can post it to Facebook, or it can be in the newspaper, or on an external website. So this is something that has been cleared by the Marketing or Communications department to say it really is public information.

If you select Confidential or Secret, then the document is encrypted, and it's encrypted in convenient ways such that, for anybody who has the right to view the document, it will automatically be decrypted for them by the Office applications: Outlook, PowerPoint, etcetera.

So just to go over these classification levels, first I'll quickly show you in PowerPoint itself. You see this particular presentation is classified Nokia Internal Use. If I just click the edit pen there, you can see the various options that I could have classified it with. With Confidential and Secret, there are sub-labels, so there's a drop-down menu here and you could select one of those three different options.

So now to explain what those labels mean.

Personal is for something that's not related to business. We understand, especially if someone is traveling with only a company laptop, that they may want to write some private e-mail, do some personal banking, or even write their shopping list. So if there's a document that they want to classify as Personal, that option is there.

Public, as I said, is something where you're telling the recipient that this is completely open information.

Nokia Internal Use means that it's information used by Nokia for Nokia business, and it should not be disclosed outside of the company unless there's a specific business reason, and there should be an NDA either with the individual or with the company.

Now, Confidential and Secret, as I mentioned earlier: these two forms are encrypted, but anybody with a Nokia identity can sign in to Word or PowerPoint or whatever and automatically get the keys to decrypt the document, if they're entitled to do so.

So if you just select Confidential, then you get the default sub-label, which is employees and externals. That means anybody with a nokia.com or an ext.nokia.com e-mail address.

So.

You can also specify that the document is read-only; a read-only document means the recipient cannot edit the document. Or you can specify Custom.

Now, Custom means that only the recipients of the document can decrypt it. So if it's e-mail, only the people listed in the e-mail recipients can read or forward the e-mail; or if it's a document, you can specify the recipients. I'll show you that on the next screen.

The difference with Secret, apart from the fact that it's classified as secret by the Nokia Information Security Classification policy, is that in the AIP implementation only the owner of the document can reclassify it.

So with Confidential, if you receive a document or an e-mail and you have the right to edit it, then you also have the right to change the label. For example, if something is no longer secret because a new product has already been launched, and the details are no longer classified as confidential, and you want to send them out to a customer, then if you have the right to edit the document, you have the right to change the label. But as I said, with Secret, only the owner can reclassify it. So it's a way of keeping more control over the distribution of the information.

Now, there is no default label.


For e-mail, it's OK to not apply a label. If the e-mail doesn't have any Nokia proprietary information, there's no need to classify it, unless you want to specifically say it's personal or public, so it's OK to leave it blank. But check with your line manager, because some business groups have a policy where everything should be classified.

For a document, however, it's better to always classify the document correctly.

So with documents you can apply a sub-label. If you select the custom sub-label, that is Confidential Custom or Secret Custom, then in a document you get a pop-up menu like this.

Here you can specify an external recipient if you wish. So it can either be a list of internal people, where the distribution is tightly controlled to only those people, or you can specify an external e-mail address.

You can also select the permissions you want to give the recipients. You can, for example, restrict them so that it's view only: they cannot edit it or print it. And if they're not allowed to print, they're also not allowed to take screenshots.

You can also set an expiry date for the document.

Now, AIP labels are designed to prevent accidental leakage of information. So if an e-mail gets forwarded inappropriately, or if it gets copied onto a USB stick and lost, then it will protect the information. It's not really intended to protect the information against a targeted cyber attack or a well-funded enemy, but it does stop a lot of accidental information leaks, which is an important feature.

However, you should only send AIP encrypted documents to a non-Nokia e-mail address if you are sure that they are able to open the document. So once again, Confidential and Secret labelled documents are encrypted, so external recipients will not be able to open them unless they are specifically listed in the custom e-mail addresses and they're able to handle AIP. If they're using Office 365 themselves, they can handle it no problem, but it's always better to ask in advance. So don't send an AIP encrypted document to a customer unannounced. If it's an important document, it's best to ask them first if they're using Office 365 and send them a test e-mail or a test document so they can verify it. Just as a reminder, AIP is a Microsoft proprietary format. It's a very convenient means of encryption for inside the company, but be very careful sending it to externals.

Now, if you do send it to an external, this is what they get. Say, for example, this is a Word document that I shared with my test account here; airmode.com is a test domain that I use. So again, depending on the permissions you set in the document, they have the permission to view or edit or whatever, so they can see what controls they have, and they just get a warning message saying that permission is restricted, only specified users can access it, and they can see what permissions they have. We use this, for example, in exchanging documents with Microsoft, because obviously they're well able to support their own technology, and there are a few of our customers who do the same, but always check first. For inside the company, everybody in the company with an Office 365 account or an NSN intra account can use AIP.

Now, if you send an AIP protected document to a customer who cannot handle AIP, all they'll see when they try to open the document is an error message like this.

So if you want to share confidential information with someone who is external to Nokia, and there is a proper NDA set up to allow you to do so, then, unless they confirm that they can handle AIP, we recommend that you use Protect with Password. This is a feature that's in Word, Excel, PowerPoint, all of the Office document creation programs, and if you select Protect with Password it uses strong, standard AES-256 encryption.

Now, if you have accidentally encrypted a document with AIP and you can't get access, we've seen this happen, for example, where there was an intern in the HR department who was using this technology, and when they left Nokia afterwards there were some documents that people could not open. So in that case, if it's an emergency, you can ask IT cyber security to assist you there. But this is not a standard service; it's mainly meant as an emergency, disaster recovery service. So the first option is always to ask the author of the document to change it.

So a new feature that's been introduced recently is that you can now edit AIP confidential and secret documents using the portal on SharePoint, so you can click and directly edit the documents in SharePoint. That's a new feature that was introduced in the last few months.

It's also available if you're using a Linux PC. So if you're coming from Linux or some other PC, maybe in a lab, that's not NSN intra joined, and you can't edit such documents on the PC, you can edit them on the portal if you have the appropriate credentials to sign in.

You can also protect other file types with AIP. In the Windows File Explorer you can right-click on a file and select Classify and Protect; that invokes the AIP client. You can use this very conveniently with images and with PDF files, because when you protect them the application will still be able to open them: you just have to click on the file and open it with the normal application. But it's different for other file types, say a text file or some file that works with some other application.

You can right-click on such a file and classify and protect it as a .pfile. The difference there is that if you just click on the .pfile, you only open up the AIP client, where you can decrypt the file; there's no way to automatically feed it into an application unless the application is aware of AIP.

Now, the file types whose applications are aware of AIP are the images, the JPEG and PNG files, and also PDF. PDF is supported natively by Adobe Acrobat, and it's also supported in the AIP client; there's a PDF viewer in case you don't have Acrobat installed. As I said, for other file types it will turn the file into a .pfile, and whoever is able to decrypt it can click on that and decrypt it back to the original file type.

And then you can feed it into whatever application it is. So if you want to send a file to somebody inside Nokia, you can encrypt it in this way and select, say, Confidential Custom; then it can only be opened by that recipient. Or if you select Confidential, then it can be opened by any employee with a valid, current account. So the idea is that if the file, for example, gets left on a USB stick and some other third party tries to read it, if they don't have a Nokia identity they won't be able to open it.

AIP labels can also be scripted; there are PowerShell scripts. So if you've got hundreds or thousands of documents that you want to protect in this way, you don't have to go and open each one; you can do it with PowerShell scripts. Contact cyber security or someone in IT for more information, or you can ask on Yammer.

So I mentioned that if you want to send something to someone outside of the company, a good way to do it is to use the AES encryption that's built into Microsoft Office.

So for example, here with the Word document, I can click on File, then click on Info, and Encrypt with Password.

So when I do that, it encrypts the document with strong, standard AES-256 encryption. This, by the way, can also be read on Linux. So if someone is sending something to a Linux user who's using LibreOffice or OpenOffice, there are programs that can decrypt such documents if and only if they have the password.
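As a rough illustration of what "if and only if they have the password" means in practice, here is a minimal sketch, not part of the original training, using the open-source Python library msoffcrypto-tool to decrypt such a password-protected Office document on Linux. The file names and the password are placeholders.

import msoffcrypto  # pip install msoffcrypto-tool

# Open the document protected with Office's "Encrypt with Password" feature
# and write out a decrypted copy. Without the correct password, load_key()
# or decrypt() will fail.
with open("report_protected.docx", "rb") as encrypted, \
     open("report_decrypted.docx", "wb") as decrypted:
    office_file = msoffcrypto.OfficeFile(encrypted)
    office_file.load_key(password="the-shared-password")  # placeholder password
    office_file.decrypt(decrypted)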

So you can share the password, for example, using Office 365 message encryption; I'll talk about that in a subsequent slide. And I would advise you to use a password manager to store the passwords.

Now, I mentioned earlier on the custom sub-labels, and I showed you an example of doing that with a Word document. You can also use the custom sub-label with Outlook. So if you select the drop-down beside Confidential, sorry, and then Custom, then only the recipients will be able to read the e-mail, and it will also be marked as Do Not Forward. So they'll see a message like this: they'll see the confirmation of who it was that signed the e-mail with AIP, and they will see a warning saying that they cannot forward it. They can reply to it; you can send it to a group of people and each of them can reply, but they cannot forward it to an external third party.

So I mentioned Office 365 message encryption. This is a convenient way of doing ad hoc encryption; in other words, there's no need to exchange secrets in advance or have certificates, and you can use it very easily from Outlook Web Access. Go to office.com with your browser, open your mailbox, and when you're sending an e-mail, click on the three dots there for the extended menu and you'll see the option to encrypt, so you can select it.

Or if you're using Outlook, or sending e-mail from a script, you can put an exact keyword in the subject line: it has to include, inside square brackets, the encrypted confidential e-mail keyword exactly as shown, no spaces, no different characters. So it's a bit awkward to use from Outlook because you have to have that exactly right, but it's very convenient if you've got a script, say, on a Linux server that's sending e-mails and you want to have them encrypted; all you have to do is add that in the subject line.

And Office 365 will automatically encrypt it.
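As a hedged sketch, not part of the original training, this is roughly what such a script on a Linux server could look like using Python's standard smtplib. The trigger keyword, addresses, and mail relay below are placeholders; use the exact keyword from the slide and whatever relay your script is allowed to use.

import smtplib
from email.message import EmailMessage

# Placeholder for the exact trigger keyword (including the square brackets)
# shown on the slide; it must match exactly, with no extra spaces.
ENCRYPT_KEYWORD = "[keyword-from-slide]"

msg = EmailMessage()
msg["From"] = "reporting-script@example.com"   # placeholder sender
msg["To"] = "recipient@example.com"            # placeholder recipient
msg["Subject"] = f"{ENCRYPT_KEYWORD} Monthly usage report"
msg.set_content("The monthly report is attached.")

# Placeholder relay host; Office 365 applies the encryption when it sees
# the keyword in the subject line of mail sent through the corporate mail flow.
with smtplib.SMTP("smtp.example.com") as relay:
    relay.send_message(msg)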

When such a message is received by a Nokia user using OWA or Outlook, it'll be automatically decrypted for them if they're signed in with their credentials.

However, if you send such a message to an external user, such as here, my test Gmail account, they will receive a message like this which tells them they have to click to read the message. You might remember there was a Hoxhunt test that sort of tried to emulate this. Now, bear in mind that the message comes from the actual Nokia mailbox, so you'll see there it's from [email protected], and the click to read the message link goes to office365.com, a valid Microsoft domain. This is what the external user sees; if you're sending such an OME encrypted message to an Outlook user inside Nokia, it will be automatically decrypted for them.

This message automatically expires after 60 days. So this is a convenient way: for example, some of our HR departments are using this to send information about pensions to people who've already retired from Nokia and no longer have a Nokia mailbox. You can send such an encrypted message to their Gmail or Hotmail account such that it's encrypted from prying eyes, and yet they can open it from their account without needing to have any certificate or anything in advance.

However, if your customer or government agency or whoever you're dealing with asks you for secure, standards-based, end-to-end encryption, they're usually looking for something like S/MIME. S/MIME is an international standard, and it's end-to-end encryption: you have to have a certificate on your PC to encrypt the e-mail, and you have to have a certificate on the recipient's PC to decrypt the e-mail. But that means that the sender has to be aware of the certificate of every recipient, so every recipient has to get a certificate, and the certificate either has to be in the directory or you send a signed e-mail to people so they're aware of what your certificate is. With S/MIME you can select to just sign an e-mail, or you can encrypt it as well, or sign and encrypt.

So it's best not to mix AIP and S/MIME; they can cause confusion if someone receives an e-mail with both set.

The links will be provided in the presentation, but that's the short link to the S/MIME pages, and this uses the nok.it short URL, which is available from inside Nokia but also from the Internet if you're signed in with a Nokia Office 365 account.

So unlike AIP, which is Microsoft proprietary, S/MIME is an international standard, and it's available on all operating systems. This, for example, shows it being used with Thunderbird, which is available on Windows or on Linux.

So you can see that this particular e-mail is encrypted and signed. For a signed e-mail there'll always be a rosette icon, and it's the same in Outlook; for an encrypted e-mail there'll be a padlock icon, and there'll be details, if you click on the details tab, to say that the message is signed and who it was signed by. So it's a good way of proving that it's not a phishing e-mail.

So as I said before, more details are available on this short link.

So going back over the key takeaway points again.

The Nokia Information classification standard is important to protect Nokia's intellectual property, so any document that's an official document or contains any proprietary information should be classified at least Nokia internal use. It's OK to send such a document to a partner or a customer, providing there's a contract and NDA in place, a business relationship with a contract.
However, if information is more confidential and should not be shown to customers or should not be sent outside of the company, it should be classified as Confidential. And if it's really crown-jewels information, classify it as Secret; then it's for named recipients only. So if you have a document you only want to go to a dozen people or whatever, mark it as Secret, and ideally use the custom option as well so that it can only be opened by those exact people.

So the AIP toolbar from IT is there to help you implement this policy when using Microsoft Office.

A Confidential or Secret label encrypts the document, so remember that a customer cannot read it unless it's specifically sent to them in custom mode. So if you have a document that's confidential and you just forward it outside of the company, the recipients will not be able to read it.

Office 365 message encryption is a good way to do ad hoc encryption. You can send such a message to
anybody with an e-mail address.

It does not provide strong end-to-end encryption, because they don't have to have a certificate or some pre-shared secret key to open it, but it does confirm that they're the right recipient and that they're still able to access their mailbox: they have to receive a one-time code in order to read that e-mail. They click on the button in the message, they get sent a one-time code by Office 365, and they need that to sign in and read the e-mail. So it's a good way to prevent the message from being exposed by being forwarded or by being lost on a USB stick, as we used to see in the past.

== M02-cvss-1 Vulnerability and Risk Management ==

Welcome to this elearning on CVSS in vulnerability and risk management. This micro module should take
about 5 minutes to complete.

You can access the narration script and glossary from the menu tab and you can access supplementary
information from the Resources tab.

This short micro module consists of one chapter. After taking this module you will be able to give a high
level overview of how we tackle security in Nokia and you'll also be able to describe vulnerability related
concepts. Now let's begin.
At Nokia, we apply a DFSEC process to ensure that products deployed in customer networks adhere to
high security and privacy standards, both at an international and national level. Examples include ISO
security standards, NSA in the US, CSL in China, and GDPR in Europe.

The DFSEC activities belong to what we call proactive security measures and are mainly supported by the
DFSEC compliance tool to evaluate and guarantee compliance to these standards at various phases of
the product life cycle.

However, despite these activities, new security vulnerabilities can still appear once a product is deployed. Sources for vulnerabilities include third-party software, protocol specifications and their deployment, and hardware systems.

In principle, Nokia is not the cause of the vulnerabilities, but Nokia can be held responsible for the eventual action plans that deal with the vulnerabilities. Of course, our own software can also contain vulnerabilities, for which we do carry final responsibility.

The main reasons for emerging vulnerabilities are that software systems have become very complex and, in the absence of good processes and practices, there is more likely to be insufficient testing and design or programming weaknesses.

The intelligence and available techniques of hackers to discover and exploit vulnerabilities are increasing dramatically and can transform a secure environment into a vulnerable one.

As well as this, the intent of hackers is becoming increasingly malicious and criminal.

To cope with this reality, we need a reactive vulnerability and risk management process, as defined in SVM and VAMS.

In this training course, the focus is on security vulnerabilities. First, let's define a security vulnerability
and then also construct a basic risk model by introducing several concepts and how they relate to one
another.

Although there are several definitions, we define a security vulnerability as a weakness in the design, development or operation of a system that, under specific conditions, can be exploited by a threat.
A threat is an entity with often malicious intent.

If this exploit is successful, it can cause an incident that has an impact on the functioning of an
organization, business process or information system where the vulnerability exists. It can also impact
any other system that interacts with the vulnerable system.

Information security risk, then, is a measure of the extent to which a system is threatened. It also
combines the technical and business impact with the likelihood that such an exploit occurs.

In the industry, weaknesses are classified based on a CWE number and vulnerabilities on a CVE number.

As an example, improper input validation is a weakness that can give rise to a vulnerability when
arbitrary commands can be executed via shell meta characters.

It is, however, beyond the scope of this course to go in further detail.

In summary, we can say that CVSS is the process and tool to analyze the technical aspect of information
security risk.

Let's check your knowledge with a quick question. Which of the following can be a source of security
vulnerabilities? Select all that apply.

Here are the key takeaways for this module. First, to cope with increasingly malicious and criminal hacking activity, we need a reactive vulnerability and risk management process, as defined in SVM and VAMS; the intelligence and available techniques of hackers to discover and exploit vulnerabilities are increasing dramatically and can transform secure environments into vulnerable ones.

Second, CVSS is the process and tool used to analyze the technical aspect of information security risk.

Finally, a security vulnerability is defined as a weakness in the design, development or operation of a system that can be exploited by a threat.
Thanks for your time. You may now close this module.

== m02-cvss-3-metrics ==

Welcome to this micro module on CVSS metric groups, metrics and values. This elearning should take
approximately 5 minutes.

This short micro module consists of one chapter. After taking this module you should be able to explain
the different metric groups. The meaning of each metric and the values they can take. You should also be
able to demonstrate an understanding of the acronyms used for the CVSS vector. Now let's begin.

Here you have an overview of all groups, metrics and their values for CVSS 3.0.

If you would like to see the previous specification for CVSS 2.0, you can find this here.

This is optional reading, but will help to give you context and a deeper understanding of CVSS 3.0.

In the exercises that follow, you will have the possibility to practice and become familiar with these
abbreviated labels. For now, select the four highlighted boxes to learn more.

Also, please note that wherever you see the information button throughout this micro module, there is optional additional information available to read on groups, metrics and values. Click on the button now to see general guidelines for CVSS 3.1 and VAMS-related guidelines.

Let's have a look at the four elements of the exploitability group. Attack Vector describes the
environment from which an attack can be launched. The risk impact is higher when attacking from the
network is possible compared to when local access is required. This is because the number of possible
attackers and their malicious objectives are larger when the attack is launched from a network
perspective.

Attack Complexity describes the conditions beyond the attacker's control that the attacker must create, or information that the attacker must gather, to be able to launch an attack. It includes system configuration settings and available computing resources.
The more complex an attack is, the lower the risk.

Interaction with users to create optimal conditions for exploitation is not taken into account here; it is covered by the User Interaction metric.

Privileges Required describes the privileges an attacker must have prior to executing a successful attack. The risk lowers when elevated privileges, granted only to selected users, are required, simply because the number of candidates is then lower.

User Interaction determines if a vulnerability can be exploited solely by actions of the attacker, or if it needs the often unwitting participation of a user, as often occurs in social engineering. The risk increases when interaction with a user is not required.

In summary, we get this table with the abbreviations for metrics and values. As an example, the metric for Attack Vector is AV, which can take the values N for Network, A for Adjacent, and so on. The abbreviated description then becomes /AV: followed by the value, for example /AV:N.
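To make the notation concrete, here is a small sketch, not part of the original module, that builds and parses a CVSS 3.x vector string in Python. The example metric values are illustrative only.

def build_vector(metrics: dict, version: str = "3.1") -> str:
    # e.g. {"AV": "N", "AC": "L", ...} -> "CVSS:3.1/AV:N/AC:L/..."
    return f"CVSS:{version}/" + "/".join(f"{m}:{v}" for m, v in metrics.items())

def parse_vector(vector: str) -> dict:
    # Skip the leading "CVSS:3.1" element, then split each "metric:value" pair.
    parts = vector.split("/")
    return dict(part.split(":", 1) for part in parts[1:])

base = {"AV": "N", "AC": "L", "PR": "N", "UI": "N",
        "S": "U", "C": "H", "I": "H", "A": "H"}
vector = build_vector(base)
print(vector)                # CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(parse_vector(vector))  # {'AV': 'N', 'AC': 'L', ...}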

The Impact Group describes metrics that allow assessment of the impact with respect to confidentiality,
integrity and availability, or CIA. It refers to the impacted component, which can be identical to or
different from the vulnerable component.

As a reminder, confidentiality refers to limiting information access and disclosure to only authorized
users, as well as preventing access by or disclosure to unauthorized users.

Integrity refers to the trustworthiness and veracity of information.

Availability refers to the accessibility of the impacted component, such as a network service. For
example, the web, a database or e-mail.

This is because attacks that consume network bandwidth, processor cycles or disk space all have an impact on the availability of a system. To complete the base group metrics, we add the already mentioned Scope metric, which allows us to distinguish between the vulnerable component and the impacted component.

The acronyms for this group can be found in this table.

The temporal group describes metrics that can change over time. They influence the risk of the vulnerability because of the current state of available exploit techniques or applications, as described by Exploit Code Maturity.

The current state of available remediation, as described by remediation level.

The confidence in the report and the credibility of the known technical details of the vulnerability are described by Report Confidence.

We will look at the temporal group in more detail later in this course in relation to the specific approach
we deploy in VAMS.

In most cases, the intrinsic risk of a vulnerability can vary because of specific conditions that exist in the environment where the vulnerable and impacted components reside.

The first method to customize the CVSS score uses security requirement metrics. These metrics allow us
to weigh the impact of an exploitation based on the importance of the confidentiality as well as the
integrity of the data or availability of the service in an organization or network. For example, if a
vulnerability impacts a router, the availability impact might be greater.

Modified based metrics allow us to describe the infrastructure specific interpretation of every base
metric.

They allow us to modify the intrinsic risk of a vulnerability based on specific mitigations that are already
in place or to control weaknesses that exist in the environment.

For example, a vulnerable service that is not active can lead to a decreased risk.
A vulnerable service can also run with higher privileges than the default expected ones, which results in an increased impact.

Lastly, a modified access vector can become local instead of a network one if the system is behind a
firewall that blocks a vulnerable SSH protocol.

The acronyms of the modified metrics are preceded by an M. As an example, the base group metric Confidentiality Impact, C, becomes MC, and Attack Complexity, AC, becomes MAC.

If a specific environmental metric is not defined, the corresponding base metric value is taken.

Click on the eye button to see optional information about the environmental requirement.

During CVD risk assessment, the most complicated activity is, of course, determining what the
appropriate values are for each metric.

Here you can see an example of a decision flow chart for a specific metric and its values, in this case Attack Complexity.

Notice that the risk increases bottom up. An overview of all other metrics can be seen in the Complete a Practice Risk Assessment micro module.
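As a toy illustration, not part of the original module, of what such a flow chart encodes, the Attack Complexity decision boils down to a single yes/no question about conditions beyond the attacker's control:

def attack_complexity(needs_conditions_beyond_attacker_control: bool) -> str:
    # "Yes" - for example a specific configuration, a race condition to win,
    # or information that must be gathered first - gives High (lower risk).
    # "No" gives Low (higher risk), matching the bottom-up risk ordering.
    return "H" if needs_conditions_beyond_attacker_control else "L"

print(attack_complexity(False))  # L - the attack works under default conditions
print(attack_complexity(True))   # H - the attacker must first set the stage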

Let's test your knowledge of groups and vectors. Drag and drop the options into the correct positions in
the table.

Here are the key learning points from this chapter. In this module we have learned that each group is
broken down into elements which are represented by an acronym. Each of these acronyms is also
followed by a value specified by a single character. Together, these acronyms and values create a vector.

Decision flow charts can help you in working out the value for each metric.

Thanks for your time. You may now close this module.
== Vulnerability scanning and penetration testing ==

== Exam Prep: GIAC Security Essentials (GSEC) ==

https://www.linkedin.com/learning/exam-prep-giac-security-essentials-gsec/vulnerability-scanning-and-penetration-testing?autoplay=true&standalone=true

Vulnerability scanning and penetration testing

- Now I've built several apps and they have to be secure, but how can I make sure?

Stay tuned to find out more.

- You're watching IT Pro TV.

- I know you're chomping at the bit for more G Sec, and well, lucky for you.

We got it coming at you.

We're going to be talking about vulnerability assessment and penetration testing and well, you know
what, Daniel, I'm just going to start here.

I've built several applications.

I say, yeah, I follow security best practices.

They're absolutely locked down.

Nothing to worry about.

However, I've started to think, I can't actually validate that or verify that.

So can you give us a rundown of like, how that process would go?

- Yeah.

And you bring up a really good scenario, at least the idea of it.

You work in a development environment, maybe you're just somebody who's building infrastructure or
you're building a bunch of network systems could be any of these things.
It doesn't matter what it is, but you're following those security best practices.

It goes along with whatever it is that you're doing.

It's like Justin, but how do I know for sure?

Well, we have to be familiar with some of these initial concepts first, before we can get into the idea of
vulnerability scanning and penetration testing properly so that we can kind of have that conversation.

One of the first things they want us to understand for the exam specifically is the idea of doing recon.

We have to kind of like survey the land as it were and see what is it that is out there.

What are the assets that are a part of my system that I need to be aware of so that I can actually test for
them.

Right?

And then of course you have to worry about protecting those as resources when it comes to doing a vulnerability assessment or a penetration test or both.

Because again, there are production things, right?

If I have a production server that could be sensitive to, say, denial-of-service attacks when I'm doing what they call stress testing.

If it knocks my production server off, can I handle that kind of a hit during production hours?

Especially if I'm a 24/7 company, which most companies are nowadays.

So I have to take that into account as I do my reconnaissance and I figure out I have all these machines, I
have all these hosts, I have all these resources or assets, and now I got to start categorizing them and
what needs to be protected.

And of course, that goes into also the idea of the CIA triad, right?

The old confidentiality, integrity and availability, right?

What on those resources are...

because they might be digital resources.

It may be something that's on the device itself, not just the asset, there's multiple types of assets.

So these are other things that we have to be familiarized with.

So if I have maybe something that has intellectual property or the...

we've come up with the better, stronger, faster mouse trap, we're about to launch this thing.

Well, I don't want that to get out.

I need to keep that confidential.

So that is an asset and I need to protect it accordingly, right?


If I do a vulnerability assessment, will that expose that?

Will it not?

Will a penetration test expose that?

Will it not?

Do I have the risk appetite to be able to actually test against something that might contain or have
access to that information?

And do I want my pen testers to have that?

I know they'll sign non-disclosure agreements and all that other good stuff, but you never know, right?

It starts to really focus us in onto the three big things that we need to be aware of when it comes to
these initial concepts, we've looked at recon, we looked at protecting our assets, or what they say,
resource protection, right?

And that's into vulnerabilities, threats and risks.

Where they've, you might say to yourself, they sound synonymous.

I saw it wash over Justin's face, just off camera.

He's like, it's the same thing, right?

- Yeah.

I'm like Daniel, I hate to break it to you, but...

- I just pull that at the source.

- Vulnerabilities, threats, risk danger, Will Robinson.

They're all the same.

- It does.

It does really seem to be like, it's all the same thing, but there's some minor nuance there when it comes
to cybersecurity that we need to be familiar with.

So let's start with vulnerabilities.

A vulnerability is the possibility that a threat might be able to take action against your system.

Okay.
So what I mean is let's say Justin, let's say you're building a site for me.

You got a web app cranking up for me.

And I take a look at it and I go, how do I log into this?

You say, we can just log in with admin, admin.

I go, cool.

Well, that is a vulnerability.

It is possible that someone else could guess, or brute force their way past that because it's a very
insecure thing.

Now, of course, we're still in the testing phase, but what if he accidentally pushed it to the internet?

That could be trouble, right?

So that is a vulnerability: the possibility that something may go awry.

Now that brings us to threats.

What is a threat?

A threat is something that could take advantage of said vulnerability.

So if I were someone let's say I was a hacker.

I'm a threat actor.

I've targeted Justin's organization, he's accidentally exposed this admin admin login to the Internet.

I'm scanning for his environment to see what's going on, trying to get a network map going on.

I go, "Hmm, what's this new web app they've got going.

I didn't see this before.

It's got a log-in.

Well, let me try some easy stuff, admin.

Oh, I'm in."

That's a threat, right?

I'm the threat.

I'm the one leveraging the vulnerabilities.


So hopefully that differentiation between the two there, and then we have risks.

So what is a risk?

A risk is the, not only the vulnerability, not only the threat, but the potential fallout.

If all those things did occur.

What could...

What are...

What are the scenarios I'm facing?

If something like that did happen.

And then of course, we take a look at all that and we assess that stuff and we do a risk assessment and
it's a lot of fun for everybody.

But I think with those things in mind, we are now poised to have a better conversation about
vulnerability assessments and penetration tests.

- It's one of those things where, with those terminologies, if it's just you and I talking, I tend to use those words interchangeably, but do know a vulnerability is the capacity to be exploited.

A threat is the actual exploitation.

Right?

And then risk is, well, if someone were to exploit it, what kind of impact would it have on me financially
or in some other capacity, right?

- Correct.

Now, speaking of which, finding some of these vulnerabilities, right?

Some of these, like string concatenations.

So that would be injection attacks.

I can see those, I'm aware of this, but there's other times where I'm like, I didn't know that was there.

How do I go about finding those?

- Yeah.

Great question.
And that really does help move us and segue us right into vulnerability scanning, because Justin is right.

He is but one man, and even if he had a team of people, if they were manually going about it, and not that they wouldn't do that, not that that's not a good practice, but that's a heavy lift.

And a lot of times what we can do is employ things like vulnerability scanners, a tool that automates that
process.

And what that will allow us to do is fire and forget, right?

I can hit a scanner.

Not that it's an exhaustive thing.

And yeah, this is just a piece of the security puzzle, but a very important one.

Right?

So what I've done is I've just, I've got a security scanner or vulnerability scanner spun up.

We can log in, take a look at a scan that I've done just to see the kind of stuff that maybe you would miss
if you were doing it manually.

All right.

So let me get logged in here.

Let me do a little bit of that here and admin, and then the password.

There we go.

Once I get logged in, I get a lovely dashboard.

So if I've run multiple scans, I can take a look at those, all the previous stuff I've done.

So I can go to something like scans, look at results, make reports, great stuff.

Right?

So here we see that a previous scan that I've run is showing me results by CVSS, results by vulnerability, a word cloud. Love word clouds.

They're very, very helpful.

By class, low, medium, high, and log.

Notice, most of them are just low.

but I did have quite a few highs.

This is something that, if I had fired off a scan in real life for Justin's application, we would definitely want to look into.

And we can come down here and start seeing what those high vulnerabilities are.
I'm seeing: operating system end-of-life detection.

So we're going to lose support for patches and upgrades.

Here.

We have a denial of service vulnerability in the Linux system that this machine is running.

Let's see here we got XSS.

This is cross site scripting and command execution vulnerabilities.

That's really bad.

That's one we're probably going to want to go further into, and you can click on these and get more information about them.

Here is the detection, the installed version, product information, giving me the CVEs that are referenced for this specific vulnerability.

So I'm getting that, that further information.

And all I had to do was start my scan and make sure that it was completely up to date as of the day I scanned; get all the updates I can, don't want to use old stuff, right?

Fired it off.

And now we can just look at the generated report and walk through that and start doing some
mitigations.

So vulnerability scans, super helpful with that.

It helps us also with our risk assessments, because now that we know what the vulnerabilities are, we
can start to build that equation.

Okay.

Here are the vulnerabilities.

There are these threats out in the wild that could potentially leverage these and the fallout from that
would be X.

And now we know where our company stands.

As far as risk goes, right?

Start doing proper risk management and either accept the risk or, what do they call it, transfer the risk, where you give the risk to somebody else: move this to a cloud service and let them handle the operating system.
And now I don't have to worry about updates anymore.

Now, for Justin's web app, I don't have to worry about that anymore; it starts to reduce my threat landscape, right?

The attack surface, as we like to say.

So there's vulnerability scanning, a pretty simple process.

You fire off your scanner of choice; there are others out there. Nessus is a very, very popular one.

I highly recommend it.

It's a good one.

Especially if you have a full enterprise environment, it's probably going to be one of your better bets.

If you're running a smaller shop, OpenVAS is what I'm using here.

It'll work for you.

Just fine.

It can be a very helpful scanning utility.

So don't count it out just because it's open source, but do your due diligence find a good scanner for
your environment and deploy it, employ it.

Have a good time with that.

- And this is one of those things as a developer, I always think of how can I get better?

Right?

How can our company get better?

This would become part of my automated systems.

Right?

I make a push, I've done a code review, me and Daniel get together.

And he goes, "Hey, that's yeah, that's bad." Okay.

I found one, but then also in the background, my scans are happening.

- Yeah.

- And then if my scans find something that's wrong, then it just stops.

Sends me an email, sends him a whatever.


Right.

The klaxon goes off in the office and we go, we've got to go fix something. By having those multi-layers you're less likely to go, "Oh yeah, I got a buffer overflow in production, by the way, we're going out of business.

That's on me."

but that's just vulnerability assessment.

You have some automated tools.

You should do some of that manually.

However, there are times where you go.

I, I found 'em.

I think I found everything.

It looks good, but we're human.

We make mistakes.

That's where penetration testing comes into play.

Right?

- Yeah.

That's exactly right.

This is basically going to be your next phase of security assessment for your system.

Right.

Remember the idea here is to verify that what we're doing to secure the system is actually effective and
it's working, right?

So once we have a vulnerability assessment, that's kind of like really stage one of a proper penetration
test.

We have to understand where the vulnerabilities are.

So don't get the two confused.

Don't because that does happen from time to time.

And we're going to talk a little bit more about some confusing terminology or things that get used
interchangeably in our, in our security world here.
Vulnerability assessment is not a penetration test and a penetration test is not a vulnerability
assessment.

They are two different things.

They might have similar ideas, but with a vulnerability assessment, I'm just finding those vulnerabilities so that we can do a risk assessment, or maybe we have to do it for compliance, right?

The penetration test.

I'm going to take those vulnerabilities and I'm actually going to see if we can exploit them.

Not only is it theoretically possible, but is it actually possible?

Is it practically possible?

Right.

So some things we need to know about penetration tests of the different types of penetration tests that
we might encounter.

And one of them is going to be white boxing versus black boxing, right?

There's gray boxing as well.

And other colors of under the sun or gradients of black, I guess, but those are the main ones, right?

White box and black box.

With a white box test, I have complete intimate knowledge of the system.

I have probably a login username and password, and I'm going to work as someone who has access into
the, into that world.

Whereas a black box test, I'm going to act more like a real threat actor would as coming from the outside
and trying to work their way in.

So understanding the difference between the two and not only that, but why would we do one over the
other, right.

So why would I want to do a white box test?

That doesn't seem realistic, except it is.

You say, how's that? Well, because insider threats are a real problem.

People get bribed, they get disgruntled and you have to prepare for threats that come from the inside.

Just as much as you do from threats from the outside.

And of course that black box test will give you a more realistic look into what it would be like to be
attacked from the outside.

So maybe you want to do one, the other, both, whatever your risk appetite is, as well as the, the current
state of your environment, which would be more beneficial.
All right.

Let's talk about internal versus external, which is kind of what we just mentioned there, but using those
proper terms where I have an internal threat, how do I mitigate against that?

Maybe I need to do separation of duties, start to make things a little bit more difficult for one person to
have a bunch of power inside of the company.

But we'll only know if we do that type of audit or that kind of penetration test.

So internal versus external: maybe I come in and act like I'm part of the dev team and start looking at actual code, right, doing code review, looking for places where things could hide or be exfiltrated, or sensitive information that could maybe be sold on the black market.

And that kind of thing.

Of course, an external test as always is somebody coming from the outside and trying to work their way
in.

All right.

Now we've been using the word pen testing.

You also may have heard the term red teaming.

Justin, I'm going to throw this on you.

Let's see.

Have you heard the term red team?

- Yes.

- What is it...

How do you take that to mean?

- People who are behaving as malicious actors, toward your company, toward your assets.

- Now, would you say that seems synonymous with penetration testing.

- Kind of.
- Yeah.

- Kind of, so...

- Let, let me ask you this.

Have you heard it be used synonymously?

- Yes, I have.

- Ah, that's really what we're trying to get at here because Justin's been in the game.

He, he knows a thing or two, which is why he's here with us, lending his expertise on this topic.

Red teaming and pen testing, kind of like vulnerability assessment and penetration testing, are not the
same thing.

They tend to get used synonymously.

Right?

So let's go ahead and just kind of level that playing field for you.

Make sure you understand those minute differences.

A penetration test is where you are going to exploit some things and you're going to report those findings, but there are going to be times when things are off limits; you're going to have to scope that penetration test.

Do you want us to do stress testing against this production server?

No, almost always.

The answer to that question is no.

Sometimes people are brave and they, they go for the Gusto, right?

- They got misplaced confidence.

- They do.

They do.

Cause your production server goes down.


You are losing money.

So most people do not have that level of risk appetite.

But if you're on a red team, if you're doing proper red teaming, there is no scope.

Everything's in scope.

You get to act as a true advanced, persistent threat.

Your goal is, by any means necessary, to gain access to the system, keep access to the system, exfiltrate data and do whatever; act as a threat would.

If we are not catching you, that's a problem.

And ultimately the goal of any vulnerability assessment, penetration test, or red team... The blue team would be the opposite side; these are the defenders, the people that are trying to defend these systems day in and day out... the goal is that they win.

Ultimately they should win every time if we are treating this as a game, right?

So a red team is actually going to act just as a true threat actor would that has targeted their
organization.

Okay.

So make sure you understand the difference in that.

A couple other terms, social engineering's a good one where, Hey, I use people's...

They call it the art of human hacking or hacking humans.

I play on your cultural normalities on your personal idiosyncrasies.

Are you a very much a people person or a people pleaser?

Maybe I can get past things.

I don't even have to do anything technical, or I send you a phishing email, a perfect example of social engineering, right?

So using those, those social norms to your advantage leveraging them so that you can gain that access,
but a social engineering campaign, very commonly done.

A lot of banks like to do this stuff.

What can you get away with?

Can you rob this bank with just a nice suit and a smile and a good story?
Maybe you can; I've seen it happen, right?

Let's see here.

Web app pen testing.

Justin knows about this.

He's actually... you've had to go through this before, haven't you?

- Yeah, yeah.

It's a fun time.

Yeah.

- How'd that work out?

- So let me tell you, I'm fairly aware.

I try to stay, you know, keep it in the front of my mind.

Hey, here are the common web app vulnerabilities.

Here are the ones that apply to the web app.

Injection attacks are always a concern, right?

Input validation and sanitization.

I was like, I got it.

I got this on lock.

I didn't.

- You missed something.

- I did.

I did.

I missed something.

It was actually a boundary issue.

I allowed unbounded input on a field.


- Seen it a thousand times, right?

- Right.

People go well, that doesn't seem like a big deal, but you could have shut down my database.

You could have caused like a, you know, memory issues on my server at the very minimum.

It could have been...

- Denial of service.

- A denial of service.

- Yeah.

But you may have even got things like, Hey, I somehow figured out how to do a buffer overflow and...

- Now I'm executing code.

- Yeah.

So it was an easy fix.

I actually did really well, but you're not perfect.

- Right.

- So just keep that in mind.

- That's a great point.

No, one's perfect in this game, which is why we perform these activities.

A couple others we need to be familiar with.

Well, a web app, obviously you've built a web application.


We're going to test it for things like cross-site scripting or code injection or command injection, SQL injection attacks, that kind of stuff, right?

CSRF; all of that, go look at the OWASP Top 10.

That's what we are looking for.

Right?

That's with a web application. There are physical tests as well.

Do I need to check the guards, gates and guns, the things that secure the facility?

Is there a lock on a door to a sensitive area?

Is it easily bypassed?

Has the room been reinforced so that, if it is supposed to be a secure area, it is actually secure, right?

So there are physical pen testers out there.

And those guys... as of the recording of this episode, there were a couple of people, I think in Iowa, that were arrested for performing a physical pen test.

And they had their get out of jail free card.

And guess what?

They went to jail.

Their trial is now over as of today or as of a couple days ago.

And they were acquitted.

The judge found them not guilty.

- That was one that was like a horrible set of like unfortunate circumstances.

- Worst case scenario.

Yeah.

- Where the get-out-of-jail-free card didn't actually cover it.

But there was like an out-of-sync issue and somebody didn't know, and things can go horribly awry.

So if you're going to go that way...


- Get that paperwork.

- Yeah.

Make sure, double check, triple check.

Absolutely a hundred percent.

Cause jail is no fun.

- No, no.

- So I hear.

- I like to avoid it at all costs.

I hope you, hopefully you do as well.

And we're going to help you out with that.

Just a couple more: network penetration tests and wireless penetration tests.

Just checking those devices, making sure they are secured properly, patches are installed.

There are no open ports.

There are no unnecessary running services.

Wireless isn't not secured.

Right.

Wireless is secured is what it should be.

Right.

The double negative was fun to say, but there you go.

That's a penetration test, different ways in which it can be done.

There's a lot of generalists out there that can kind of do 'em all.

But there's also a lot of specialists that specialize in one or the other.
So it's up to you to figure out what you need for your environment, but make sure that you are engaging
in that so that you can make sure and know for certain whether or not your security controls are
effective.

- I tell you what, Daniel, I'm getting a little paranoid.

I got a couple apps out there.

I'm going to have to go call some people.

And so we're definitely going to have to call this one done, but definitely join us back for more G Sec, but
for now signing off for IT Pro TV.

I've been your host Justin Dennison.

- I'm Daniel Lowrie.

- And we'll see you next time.

- Thank you for watching IT Pro TV.

== M02-vulmgt-intro ==

Welcome to the introduction to secure deployment.

I am Ben Aveling, cybersecurity architect and global champion for security for the CNS community of
practice.

In this video, we'll introduce secure deployment.

We'll talk about who needs to know.

Welcome to vulnerability management, one module in the Security Practices for CNS curriculum.
It's tempting to think that if no vulnerabilities have been found after months or years of use that
software must be secure or to think that if our software is not exposed to the Internet and not exposed
to end users, then it is not at risk.

Sadly, our customers' networks are vulnerable to being breached, either by someone finding a way past the security controls or by a malicious insider.

Networks are critical infrastructure and are targeted not just by smash-and-grab criminals, but also by
stealthy nation-state actors who can spend months or years finding vulnerabilities, compromising trusted
people, and maneuvering their own people into positions of trust.

We have to assume that if our software has vulnerabilities, then our software will eventually be
compromised.

We can divide vulnerabilities into two classes: those we introduce ourselves, and those that we import through our software supply chain from third parties.

We look at some ways to deal with our own vulnerabilities in the Secure product development module
and in the secure deployment module.

This module focuses more on managing the vulnerabilities that we import in commodity components; that is, the finding, tracking and addressing of vulnerabilities that are in third-party components: software libraries, modules, packages, and even the OS and the cloud infrastructure that we use in the systems that we build and deliver.

Commodity components are continually being examined for vulnerabilities, both by defenders and would-be attackers. There's a continual stream of vulnerabilities being disclosed, which is both good and bad.

It should mean that over time software becomes safer.


But it also means that we need to be continually addressing the vulnerabilities that have been found, before they can be exploited by bad actors, and to maintain the trust of our customers and regulators.

If we regularly deploy software with known vulnerabilities, or if we cannot resolve known vulnerabilities in a reasonable period of time, we expose our customers to risk and we erode their trust in us.

Therefore, we check our products for known vulnerable components with vulnerability scanners such as Nessus, JFrog Xray and Anchore.

This module contains a discussion of vulnerability scanning.

In addition to vulnerability scanners, Nokia subscribes to what is called a threat feed, which is an independent, consolidated list of newly discovered vulnerabilities.

We put these vulnerabilities into the VAMS system. VAMS is short for Vulnerability Management System.

VAMS tracks the components that are being used by each of our products.

And it tracks the vulnerabilities in those components from their discovery all the way through to their
resolution. It does this for every product and for every version of every product. This module contains an
introduction to VAMS.

It's Nokia policy to set target times for resolving vulnerabilities, and not to make contractual commitments to resolving vulnerabilities within any fixed time frame.

This has traditionally been accepted by our customers, but some customers are becoming less tolerant.
Some of our customers have started running vulnerability scanners against our products and some
customers are starting to demand that we commit to timely resolution of vulnerabilities.

The most common way to resolve a vulnerability in a third party component is to update to the latest
available release of that component.

Vulnerabilities are rarely publicly disclosed before a fix is available.

However, not all of our products are designed for frequent upgrades. Some of our products might have two releases a year, or less.

Deploying these releases can take months and is labor intensive. Customers don't always deploy every release, sometimes upgrading only every couple of years.

This kind of schedule makes it hard to stay free of known vulnerabilities.

Even the latest available version of an operating system may contain vulnerabilities that have not yet been fixed, and releases are not always built on the latest available version of the operating system, so they will contain known vulnerabilities.

All told, the chance of old components having known vulnerabilities is significant. This can leave our products exposed to being exploited and can expose us to reputational and regulatory risk.

We mitigate this risk somewhat by prioritizing the most severe vulnerabilities to be fixed first.

The severity of a vulnerability is calculated by feeding everything that's known about the vulnerability
into a formula known as the common vulnerability scoring system.

CVSS takes into consideration such things as the ease of exploitation, the likely impact of exploitation, and whether or not it is believed the vulnerability is being exploited in the wild. Most vulnerability scanners will report a CVSS score with each vulnerability found, and our threat feed has a severity for each vulnerability included.
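For reference, this is a sketch of the public CVSS v3.1 base score formula as published in the FIRST specification; the per-metric weights (for example 0.85 for AV:N) are tabulated in that specification and omitted here.

\begin{aligned}
\mathrm{ISS} &= 1 - (1 - C)(1 - I)(1 - A)\\
\mathrm{Impact} &=
  \begin{cases}
    6.42 \cdot \mathrm{ISS} & \text{scope unchanged}\\
    7.52\,(\mathrm{ISS} - 0.029) - 3.25\,(\mathrm{ISS} - 0.02)^{15} & \text{scope changed}
  \end{cases}\\
\mathrm{Exploitability} &= 8.22 \cdot AV \cdot AC \cdot PR \cdot UI\\
\mathrm{BaseScore} &=
  \begin{cases}
    0 & \text{if } \mathrm{Impact} \le 0\\
    \mathrm{roundup}\!\left(\min(\mathrm{Impact} + \mathrm{Exploitability},\ 10)\right) & \text{scope unchanged}\\
    \mathrm{roundup}\!\left(\min(1.08\,(\mathrm{Impact} + \mathrm{Exploitability}),\ 10)\right) & \text{scope changed}
  \end{cases}
\end{aligned}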

Our processes allow deployment teams to raise or lower the severity of each vulnerability if they feel it appropriate.

Where the same vulnerability is present in multiple Nokia products, or multiple versions of a product, VAMS supports assigning a different severity to each version of each product.

This module has units that cover CVSS in some detail.

These are the units in this module: vulnerability management, vulnerability scanning, vulnerability scoring and VAMS.

These are optional units that are recommended if you want to know more about these topics. In addition, there are further learnings in Nokia Learn and LinkedIn Learning.

Thanks for completing this module; you're making Nokia and its customers safer.

== DNS-Attacks ==

Earlier in this course, you learned about the important role that the domain name system, DNS, plays on
networks.

As a reminder, DNS translates the common names that we use on a regular basis such as linkedin.com or
nd.edu to the IP addresses that computers use such as 108.174.10.10 or 34.193.237.201.

DNS uses a hierarchical lookup system where the initial request goes to a server on the client's network.

If that server doesn't already know the answer, it then asks a series of other servers until it finds the one
with the correct answer.

For example, when looking up www.wikipedia.org, an organization's DNS server first asks the root name
server.

Now, if the root name server doesn't know the answer, it tells the requester what name server is
responsible for the .org top level domain.
The client then asks the .org server who also doesn't know the answer, but that server tells the requester
what name server is responsible for the wikipedia.org domain.

The client then asks that server, which does know the answer, and the client receives the address in the server's reply.
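To make the lookup chain concrete, here is a rough sketch, not from the course, that walks the hierarchy by hand using the third-party dnspython package (pip install dnspython). Error handling is omitted, the starting address is one of the well-known root servers, and the final answer may itself be a CNAME that needs one more lookup.

import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

name = "www.wikipedia.org."
server = "198.41.0.4"   # a.root-servers.net, one of the root name servers

while True:
    query = dns.message.make_query(name, dns.rdatatype.A)
    response = dns.query.udp(query, server, timeout=5)

    if response.answer:  # the server we asked knows the answer
        for rrset in response.answer:
            print(rrset)
        break

    # Otherwise this is a referral: the authority section names the servers
    # responsible for the next level down (.org, then wikipedia.org).
    ns_name = str(next(rr for rrset in response.authority for rr in rrset))
    server = str(dns.resolver.resolve(ns_name, "A")[0])  # look up that server's address
    print(f"referred to {ns_name} ({server})")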

Now, DNS poisoning introduces false results into this process by inserting incorrect DNS records at any
point along the hierarchy.

The attacker can redirect traffic to the attacker system.

The attacker system may then contain a web server built to closely resemble the system that the
unsuspecting victim expected to visit.

When the victim logs onto the attacker system, the attacker is able to capture that log on.

Well executed attacks will pass credentials through to the real system and will then capture all traffic between the two, preventing the victim from noticing the man-in-the-middle attack.

Typosquatting is an attack that depends upon people making simple typing mistakes.

It's very cheap to register a domain name.

Sometimes it's $5 or less.

Attackers engaging in Typosquatting simply register hundreds of typo variations of official sites.

Then when people incorrectly guess or mistype domain names, they wind up visiting the attacker's site
instead of the real site.

Typosquatting happened during the 2012 presidential campaign, when attackers registered all sorts of
variations on the barackobama.com domain hoping to redirect legitimate traffic.

In domain hijacking attacks, the attacker actually takes over a domain registration from the true owner
without permission.

They may accomplish this by using social engineering techniques on the domain registrar conducting a
pre-texting attack where they pretend to be the authorized owner or by stealing the access credentials
of an authorized domain administrator.

URL redirection attacks place content on a legitimate site that automatically forwards a user from that
legitimate site to a malicious site.

Attackers might do this by posting malicious content in a forum that allows redirects or by compromising
the legitimate web server.

Domain reputation systems help cybersecurity analysts identify whether traffic is coming from a known
and trusted domain or whether that domain is associated with past malicious activity, either as a
perpetrator or an innocent third party.
Threat intelligence services offer domain reputation scoring as one of their core capabilities and many
organizations integrate this domain reputation into intrusion prevention systems and other security
controls.

== Firewalls ==

If routers and switches are the connectivity building blocks of a network, firewalls are the security
workhorses.

Firewalls act like the security guards of a network, analyzing all attempts to connect the systems on a
network and determining whether the requests should be allowed or denied, according to the
organization's security policy.

Firewalls often sit at the network perimeter in between an organization's routers and the internet.

From this network location they can easily see all inbound and outbound connections.

Traffic on the internal network may flow between trusted systems unimpeded, but anything crossing the
perimeter to or from the internet must be evaluated by the firewall.

Firewalls often connect three networks together, the internet, an internal network, and a special-
purpose network known as a demilitarized zone or DMZ.

The DMZ contains systems that must accept direct connections from the outside world, such as public
web servers.

The DMZ isolates those systems because they are at higher risk of compromise.

If an attacker manages to compromise a system located in the DMZ, they still don't have direct access to
systems located on the internal network.

Older firewalls use an approach called stateless inspection that evaluated each packet separately when it
arrived at the firewall, this approach was inefficient and it also deprived the firewall of the ability to
make decisions in the context of a broader history.

Modern firewalls use a technique known as stateful inspection that allows them to keep track of
established connections, for example, when a user on the internal network requests a webpage from a
server, the firewall notes that request and then allows the web server to respond, and the two systems
then communicate back and forth for the duration of the connection without the firewall reevaluating
the request each time a new packet appears.
When the firewall encounters a new connection request, it evaluates that request against a set of rules
created by system administrators, these rules describe network connections that the firewall should act
upon, using several important characteristics.

These are the source systems affected by the rule, the destination systems affected by the rule, the
destination port and protocol affected by the rule, and the action that the firewall should take when
encountering traffic matching the rule, this is normally either allow or deny, specifying whether the
firewall should permit or block traffic that matches the description in the rule.

Imagine that we have a web server in our DMZ with the IP address 10.15.100.1, if we want users on the
internet to access that system we must write a firewall rule to permit that access, this is a rule that
permits access so we set the action to Allow.

In this case, the connection request would be coming from an unknown internet system to the web
server, since we want anyone to have access to the web server, we set the source address to Any.

Now, we do want to limit that access to the web server only, so the destination system is 10.15.100.1,
the IP address of the web server, and the web server runs only on port 80, so we specify the destination
port as TCP port 80, and that's how you create a firewall rule.
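
As a rough sketch only, the same rule could be written on a Linux-based firewall using iptables syntax; the chain and the default-deny policy shown here are assumptions for the example, not a recommended production configuration:

# Allow any source to reach the DMZ web server on TCP port 80
iptables -A FORWARD -p tcp -d 10.15.100.1 --dport 80 -j ACCEPT

# Default deny: anything not explicitly permitted is dropped
iptables -P FORWARD DROP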

Firewall configuration simply consists of writing many rules like this one and adding them to the
configuration as new systems require access.

One of the core principles of a firewall is that any traffic that isn't explicitly permitted by a rule should be
automatically denied, this principle is known as the default deny or implicit deny rule, and it's a very
important concept that is often tested on the exam.

The current generation of firewalls are known as next generation firewalls, and by the abbreviation
NGFW.

These devices are capable of incorporating quite a bit of contextual information into their decision-
making process, they might evaluate a request based upon the identity of the user, the nature of the
application, and even the time of day.

Firewalls are often called upon to perform other network functionality as well.

They commonly serve as network address translation or NAT gateways.

In this role, the firewall translates between the public IP addresses used on the internet and the private
IP addresses used on local networks.

NAT settings are normally configured as part of the firewall rules that permit access, and because of their
role at the edge of the network firewalls also provide an ideal location to perform content filtering using
URL filtering and similar approaches.

Web application firewalls are a specialized type of firewall that is application aware.

Web application firewalls understand how the HTTP protocol works and peer deep into those application connections, looking for signs of SQL injection, cross-site scripting, and other web application attacks.
Now, when you select a firewall technology for your network, you face several choices.

First, you may choose the deployment methodology.

Network firewalls are physical devices that sit on a network, regulating traffic, while host-based firewalls
are software applications or operating system components that reside on a server that performs other
functions.

Most organizations choose to use both network firewalls and host-based firewalls to achieve a defense-
in-depth approach to network security.

Next, you'll need to choose whether to use open-source or proprietary firewall technology.

Hardware firewalls are almost always proprietary while software firewalls may be either proprietary or
open source.

And finally, for your network firewalls you'll need to choose a deployment mechanism.

The two primary approaches used today are dedicated hardware appliances that ship from the
manufacturer with firewall firmware built in, and virtual appliances that may be loaded directly into a
virtualization platform.

The benefit of both physical and virtual appliances is that they require minimal installation effort, they
allow administrators to get right down to the work of configuring security rules.

== Intrusion-Detection ==

-- Intrusion detection and prevention --

Intrusion detection and prevention systems play an extremely important role in defending networks against attackers and other security threats.

Intrusion detection systems sit on the network and monitor traffic searching for signs of potentially
malicious traffic.

For example, an intrusion detection system might notice that a request bound for a web server contains a SQL injection attack, that a malformed packet is attempting to create a denial of service, that a user's login attempt seems unusual based upon the time of day and prior patterns, or that a system on the internal network is attempting to contact a botnet command and control server.

All of these situations are examples of security issues that administrators would obviously want to know
about.

Intrusion detection systems identify this type of situation, and then alert administrators to the issue for
further investigation.
In many cases, administrators are not available to immediately review alerts and take action, or they're
simply overwhelmed by the sheer volume of alerts generated by an intrusion detection system.

That's where intrusion prevention comes into play.

Intrusion prevention systems are just like intrusion detection systems, but with a twist.

An IPS can take immediate corrective action in response to a detected threat.

In most cases, this means blocking the potentially malicious traffic from entering the network.

Intrusion detection systems can make mistakes.

There are two different types of errors caused by these systems, and monitoring those errors is an important part of security analytics.

False positive errors occur when the system alerts administrators to an attack, but the attack actually
didn't take place.

This is an annoyance to the administrator who wastes time investigating the alert and it may lead to
administrators ignoring future alerts.

False negative errors occur when an attack actually takes place, but the intrusion detection system does
not notice it.

Intrusion detection and prevention systems use two different technologies to identify suspicious traffic.

The most common and most effective method is called signature detection.

This approach works in a manner similar to antivirus software.

Signature based systems contain very large databases containing patterns of data, or signatures, that are
known to be associated with malicious activity.

When the system spots network traffic matching one of those signatures, it triggers an intrusion alert.

This approach is also known as rule-based detection.

The downside to this approach is that a signature based system cannot detect a previously unknown
attack.

If you're one of the first victims of a new attack, it will sneak right past a signature detection system.

The upside is that if the signatures are well designed these systems work very well with a low false
positive rate.

Signature detection is reliable, time tested technology.
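
To make the idea concrete, rule-based systems such as Snort express those signatures as rules. The line below is a simplified, hypothetical rule written for illustration, not one taken from a real rule set, and it assumes the $HOME_NET variable is defined in the sensor's configuration:

alert tcp any any -> $HOME_NET 80 (msg:"Possible SQL injection attempt"; content:"' OR 1=1"; nocase; sid:1000001; rev:1;)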

The second method used by an IDS is known as anomaly detection.

This model takes a completely different approach to the intrusion detection problem.

Instead of trying to develop signatures for all possible malicious activity, the anomaly detection system
tries to develop a model of normal activity and then reports deviations from that model as suspicious.
For example, an anomaly detection system might notice that a user who normally connects the VPN
from home during the early evening hours is suddenly connecting from Asia in the middle of the night.

The system can then either alert administrators or block the connection, depending upon the policy.

The models developed by these IDS and IPS systems are often application aware, and understand how to
dissect the layer seven protocols in use during a communication.

Anomaly detection does have the potential to notice new attack types, but it has a high false positive
error rate.

This technology has several different names.

When you take the exam, know that anomaly detection, behavior detection, and heuristic approaches
are the same thing.

There are also differences in the way that intrusion prevention systems are setup and configured on the
network.

Let's talk about two different approaches.

In-band and out-of-band.

In an in-band or inline deployment, the intrusion prevention system sits directly on the network path
and all communications must pass through it on their way to their final destination.

In this approach, the IPS can block suspicious traffic from reaching its final destination.

While this approach allows an active response, it also adds the risk that an issue with the IPS can disrupt
all network communications because the inline IPS is a single point of failure.

In an out-of-band deployment, the IPS is not in the network path, but it sits outside the flow of network
traffic.

It's connected to a span port on a switch which allows it to receive copies of all traffic sent through the
network to scan, but it cannot disrupt the flow of traffic.

This approach is also known as passive mode because the IPS can still react by sending commands to block future traffic from offending systems, but it cannot stop the initial attack from entering the network because it only learns about the traffic after it has been sent.

== Nmap ==

-- Running and interpreting a simple Nmap scan --

Now that you have Nmap up and running on your system, you're ready to run a basic Nmap scan.

Before we run that scan, you'll need to know a little bit about the way that Nmap presents its results.

Nmap will provide you with a list of ports that it detected and then provide some state information for
each one of those ports.
There are four possible states.

Open ports are those that are listening for incoming connection requests and responding to those
connections.

Closed ports are those that seem to be accessible to the scanner, but there is no service responding to
connection requests.

Filtered ports are ports that Nmap attempted to scan, but a firewall interfered with the scan, and finally,
unfiltered ports are those that Nmap was able to access, but for some reason, wasn't able to determine
whether the port was open or closed.

There are also two special cases that you should be aware of.

Nmap might be unable to make a definitive statement about the state of a port.

In those cases, Nmap will provide you with two options that it's unable to choose between.

For example, a port marked open|filtered is either open or filtered, and a port marked closed|filtered is
either closed or filtered.

Now with that information under our belts, let's try performing an Nmap scan.

I'm back at the Mac command line, and I'm going to attempt to run a basic scan of the scanme.Nmap.org
server.

To do that, I simply type Nmap and the DNS name of the server.

The scan runs, and I see some results.

Now, before I look at those results with you, I want to run the scan one other way.

I don't have to specify the DNS name of a server.

I can also use an IP address.

So let's go ahead and look up the current IP address for scanme.Nmap.org.

I can do that using the dig command, and in my results, I see that the IP address for this server is 45.33.32.156.

So let's try running the Nmap scan by specifying the IP address instead of the DNS name.

I'm just going to type Nmap and then the IP address, 45.33.32.156.

And when that scan completes, I get the same results that I did when I scanned the server by DNS name.
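
For reference, the output looks roughly like the sketch below; the latency, the number of closed ports, and potentially the open ports themselves will vary from scan to scan:

Nmap scan report for scanme.nmap.org (45.33.32.156)
Host is up (0.18s latency).
Not shown: 996 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
80/tcp    open  http
9929/tcp  open  nping-echo
31337/tcp open  Elite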

So let's take a look at these results.

We have four lines here.

It's showing me four open ports on this server.


The first one, port 22, is used by the Secure Shell Protocol.

That's a way to make administrative connections to a Linux system.

So I'm guessing from this that this is probably a Linux system.

The second result is showing me that port 80 is open on this system.

That's a port that's normally used for the HTTP service.

So there's probably a web server running here as well.

Then I see two other ports that I'm not so familiar with: 9929 and 31,337.

Those are kind of unusual ports.

So this might be something that I need to dig into a little further.

I can see from the description that 9929 is associated with Nping.

So that's probably an Nmap-related service, but then 31,337 says elite.

And actually, 31,337 is hacker speak for the word elite, and it's a commonly-used port when systems are
compromised.

So if I saw this scan result on one of my own systems, I'd be very concerned that port 31,337 is open.

You now know how to run a basic Nmap scan and interpret the results.

If you'd like, now would be a great time to pause the course and try running a scan on one of your own
systems.

Now remember, you should never scan the system unless you have explicit permission to do so.

== Security-Zones ==

Well-designed networks use firewalls to group systems into network segments based upon their security
level.

Let's talk about some of the more common security zones, and we're going to begin with the network
border firewall.

Typical border firewalls have three network interfaces because they connect three different security
zones together.

One interface connects to the internet or another untrusted network.

This is the interface between the protected networks and the outside world.
Generally speaking, firewalls allow many different kinds of connections out to this network when
initiated by a system on more trusted networks, but they block most inbound connection attempts,
allowing only those that meet the organization's security policy.

A second network interface connects to the organization's intranet.

This is the internal network where most systems reside.

This intranet zone may be further subdivided into segments for endpoint systems, wireless networks,
guest networks, data center networks, and other business needs.

The firewall may be configured to control access between those subnets or the organization may use
additional firewalls to segment those networks.

The third network interface on the firewall connects to the DMZ.

Short for demilitarized zone, the DMZ is a network where you can place systems that must accept
connections from the outside world, such as mail and web servers.

Those systems are placed in a separate security zone because they have a higher risk of compromise.

If an attacker compromises a DMZ system, the firewall still blocks them from breaching the intranet.

Network designs using this philosophy often created an implicit trust in systems based upon their
network security zone.

This approach is now going out of style in favor of a security philosophy known as zero trust.

Under the zero trust approach, systems do not gain privileges based solely upon their network location.

There are also three special purpose networks that you need to know about.

Extranets are special intranet segments that are accessible by outside parties.

For example, if you need to allow vendors to access your ERP system, you might have them use a VPN to
connect to an extranet that allows the limited intranet access that they need as business partners.

Honeynets are decoy networks designed to attract attackers.

They appear to be lucrative targets, but in reality they don't contain any sensitive information or
resources.

Security teams use honeynets to identify potential attackers, study their behavior, and block them from
affecting legitimate systems.

Ad hoc networks spring up whenever someone sets up a wired or wireless network outside of your
standard security design.

Now these networks are often planned to be temporary in nature, but they sometimes last longer than
was intended.
Ad hoc networks may present a security risk, especially if they are interconnected with other networks
but don't have strong security controls.

For example, an employee who sets up a wireless access point without using encryption and then
connects it to the intranet may inadvertently expose sensitive information to eavesdropping and create a
potential path for an attacker to enter the organization's network.

Now there are two last terms that you'll need to know when you take the exam.

Networking professionals often refer to the type of traffic on a network using terms derived from
compass directions.

Network traffic between systems in a data center is called east-west traffic, while traffic between systems in a data center and systems located on the internet is called north-south traffic.

Either type of traffic may be regulated by a firewall if it crosses security zones.

== TLS and SSL ==

Digital certificates allow for the secure exchange of public encryption keys over otherwise untrusted
networks.

Transport encryption technology, such as Transport Layer Security, or TLS, uses those certificates to
facilitate secure communication over public networks.

Let's explore TLS by describing the process the two systems follow when they wish to set up an
encrypted session protected by TLS.

First, the client sends a request to the server asking that the server initiate a secure session.

This request includes a list of cipher suites supported by the client.

Now it's important to understand that TLS is only a protocol that uses other cryptographic algorithms.

TLS is not a cryptographic algorithm itself.

Therefore, you can't encrypt something with TLS.

You can use TLS to apply other encryption algorithms.

The listing of cipher suites sent by the client to the server is a laundry list of the encryption algorithms,
hash functions, and other cryptographic details that the client understands.

Those cipher suites are only as strong as the algorithms that they include.

Therefore, it is possible to use TLS in an insecure manner by choosing a weak or insecure cipher suite.
Once the server receives that request from the client, it analyzes the list of cipher suites that the client proposes and compares it to the list of algorithms supported by the server.

It then sends a message back to the client with two pieces of information.

First, the server tells the client which of the cipher suites it would like to use for the communication.

Second, the server sends the client the server's digital certificate, which contains the server's public
encryption key.

When the client receives the server's digital certificate, the client checks what certificate authority issued
the certificate and uses the CA's public key to verify the digital signature on the certificate.

It also verifies that the server name on the certificate matches the DNS name of the server, and that the certificate has not expired or been revoked.

If all of those things check out, the client knows that it has the correct public key for the server.

Once the client is satisfied about the server's identity, the client creates a random encryption key called
the session key.

This is a symmetric encryption key that will be used for this one communication session between the
client and the server.

The client then uses the server's public key to encrypt the session key and sends that encrypted key to
the server.

When the server receives the encrypted key, it uses its own private key to decrypt the session key.

The two systems may then communicate for as long as they like using that session key.

Once they close the connection, the session key is destroyed and the TLS handshake starts over the next
time the two systems wish to communicate.
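
If you'd like to see a negotiated session for yourself, a few lines of Python will perform the handshake just described and report what was agreed. This is a minimal sketch, assuming a reachable TLS-enabled host; the host name is only an example:

import socket
import ssl

hostname = "www.example.com"                 # any TLS-enabled server will do
context = ssl.create_default_context()       # verifies the certificate chain and host name

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Protocol:", tls.version())      # e.g. TLSv1.3
        print("Cipher suite:", tls.cipher())   # the negotiated cipher suite
        print("Peer certificate subject:", tls.getpeercert()["subject"])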

One quick exam tip.

Session keys are also known as ephemeral keys.

If you see the term ephemeral key on the exam, they're just talking about session keys.

You may also hear about an encryption technology called the Secure Sockets Layer, or SSL.

SSL was the predecessor to TLS, and it works in a very similar way.

However, there are known security flaws in SSL, so it should no longer be used.

Unfortunately, many people use SSL as a generic term when they're really talking about TLS.

This can be very confusing, so be careful to dig deeper whenever you hear the term SSL being used.

== Wireshark ==

-- Outlining the benefits of Wireshark --


Although there are many other packet analysis tools available, the tool I prefer is Wireshark, an open
source tool with a rich graphical user interface and many built-in features.

Wireshark is the tool that we can use to baseline the network and then actively monitor the changes and
identify threats and respond more quickly to remove them from the network.

In addition to Wireshark, there are other packet analyzers.

Cain and Abel can recover passwords by sniffing the network and can record voice over IP conversations.

tcpdump is a protocol analyzer that runs from the command line, and NarusInsight, formerly Carnivore, can monitor all internet traffic.
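
As a quick, hedged example of command line capture, tcpdump can be run like this; the interface name is an assumption and will differ on your system:

# Capture traffic on port 80 from interface eth0 and write it to a file
tcpdump -i eth0 -w capture.pcap port 80

# The resulting capture.pcap file can then be opened in Wireshark for analysis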

Network administrators should be familiar with packet analysis.

As we can see on this webpage, Cisco builds Wireshark into the Cisco Nexus 7000 series along with many other devices.

If you've never used Wireshark, then go to wireshark.org.

The home page has resources where you can download Wireshark, learn about it and enhance your
capture abilities with some of the add-ons and interfaces.

We'll select download.

Once the installation is complete, you can run Wireshark.

Here's the splash screen where you can begin your capture.

Up on the top, you'll see some menu choices such as file, edit, view, capture, analyze, statistics, tools and
help.

You'll also see on the second row, some commonly referenced icons, some of which are grayed out
simply because there's no packet captures to reference.

Now, when you're ready to capture, I'll go to capture and then options and show you this interface.

And we know we want to capture on Wi-Fi.

I'm not going to put any filters, we'll just say start.

Now, we see the display filter up here.

And then once you begin your first capture, you'll see the three panes, and this is what Wireshark defaults to.

The packet list, and that's where you see all of these packets.

The details pane shows you the detail of one single packet or frame, and down below is the packet bytes pane, which really doesn't help me in analysis.

So, I generally remove this by going to view and unselecting packet bytes.
And then this gives me a little bit more landscape when I want to do some analysis.

I'll stop the capture.

And now we can begin our analysis.

With over 5 million downloads of Wireshark every year, network analysts who have skills in packet
analysis have set themselves apart.

Learn how to effectively use Wireshark in order to troubleshoot, actively monitor changes on the
network and identify threats.

== M04.Central-Access-Control ==

Centralized access control

- [Instructor] All right, let's talk about centralized access control.

So, in the world of access control administration we have the centralized, the decentralized, and the
hybrid approaches to controlling access.

In the centralized approach, one entity, or one department, controls access, or governs what the access
for subjects is to different objects, all the objects that are in a particular security domain.

In the decentralized approach, each object owner, or each, let's say, resource owner, configures or
determines their own security policy and what people can get access to.

So, in the decentralized approach you may have some inconsistencies from device, to device, to device,
or from asset, to asset, to asset, from file, to file, to file, as to what people are getting access to because
each individual asset owner is determining what the policy should be.

In the centralized approach, since it's all going through this one entity, this one decision maker, that
could be an IT department, it could be a domain controller, it could be something else, it's centralized
nonetheless, it's gonna be a uniform policy throughout all the different objects in the organization, or in
the security domain, as we would say.

Because that one place, that one entity decides, or has the policy configured in it, for what you're
allowed to get.

Well, the only downside to that approach is you'll have a uniform policy, which is good, but it may act as
a bit of a bottleneck.

If you need to get a security permission changed it's got to go through this one place, this one
department.

You submit a request to the IT department, and they put that on the backlog of things they need to go
do, whereas with the decentralized approach it tends to be a bit more nimble, you can go straight to that
asset owner and say hey, change this, and change, and it's done.
If there is a benefit to the decentralized approach, that's it, is it might be a bit more nimble, but for most
enterprises, most organizations, we use a centralized approach, or something like a hybrid, where we
might have, let's say, a US department IT and a European department for IT, and those are the two
different entities that govern control, or it might be a US domain controller that's configured with a
certain policy, and a European domain controller that's configured with a certain policy for all of the
objects in each of those domains, the US domain and the European domain.

So, when we talk about centralized access control, we're almost always using some form of AAA
protocol.

Now, the AAA stands for authentication, authorization, and either auditing or accounting, depending on
who you learned this from.

Back in the day if you learned this from Microsoft curriculum it was auditing, that's what the last A stood
for.

If you learned it from a Cisco curriculum it was accounting.

Either way, you could see them in different references all over the place, but that last A just means
logging in one form or another.

Authentication is proving you are who you claim to be, authorization is giving you some permissions
once you've been authenticated, you are the real John Smith so now you're gonna get read and write
permission to this thing, and the auditing/accounting piece is all about logging your actions once you're
given access.

Now, for this we have AAA protocols.

The RADIUS, TACACS+, and Diameter protocols are the ones we'll talk about here, and they're the mainly used protocols for this, so let's get right into it.

RADIUS, the remote authentication dial-in user service, was an old protocol developed originally for
dialing in, for dialing up with a modem and getting access.

It's a remote access protocol.

It's the de facto standard for authentication in a centralized approach, the de facto AAA protocol.

In a RADIUS system you log in to some VPN endpoint, let's say, a network access server, a NAS, and that
NAS, that network access server, is the RADIUS client to a RADIUS server.

Now, there may or may not be some RADIUS proxy server in the middle, but the VPN server that you log
into is the client in the RADIUS configuration to a RADIUS server, which is actually the sort of policy
server.

So, you call up the VPN endpoint, the network access server, the NAS, and say hey, I'd like to log in.

The NAS says okay, hold on.

What's your username? And it calls up the RADIUS server, as it is acting as the client, and says, I got John
Smith here who wants to get in.
Okay, what's his password? And the network access server gets your password, sends it back, okay.

The RADIUS server likes what it sees, that's the authentication piece, that's the first A.

Then it says okay, here's what John Smith can have access to, he can have access to these subnets, or this
thing, or that resource, and then it says, and by the way, we want you to track these activities about his
access.

So the network access server now knows to log certain activities about you, to give you access to certain
subjects, or certain subnets, and not others, and it now knows that you're the real John Smith, 'cause it's
passed on that authentication information and been verified by the RADIUS server.

So, those are the three As, it authenticated you, it authorized you with some access, and then it's
tracking your movements.

It's an open-source protocol, and it's been integrated into just about any product you can imagine, and
it's this client-server approach where the RADIUS client is the server, the VPN server, that you log into,
and it is a RADIUS client to the RADIUS server.
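
To picture that client-server relationship, here is a minimal, hypothetical FreeRADIUS-style configuration fragment; the address, shared secret and user entry are invented for the example and are not a recommended real-world setup:

# clients.conf - register the NAS (the VPN endpoint) as a RADIUS client
client vpn-nas {
    ipaddr = 192.0.2.10
    secret = example-shared-secret
}

# users - a simple entry the server can authenticate and authorize against
jsmith  Cleartext-Password := "not-a-real-password"
        Reply-Message = "Welcome, John Smith"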

Now another protocol that's still used today is TACACS+.

Well, back in the old days there was a protocol called TACACS, the terminal access controller access
control system.

How many times can you put access control in one acronym? Well, it was an old protocol back in the
'80s.

Cisco created a newer version of it called TACACS+, and it's a proprietary protocol used by Cisco, but it's
still quite a bit in use because Cisco has a sort of ubiquitous footprint in IT.

Now, a subtle difference between RADIUS and TACACS+ is that TACACS+ splits the AAA pieces, the authentication, the authorization, and the auditing, into three separate profiles or policies that can be configured separately.

So it's a bit more flexible when it comes to administration in that respect, it can be very granularly
configured.

It also tends to provide more protection for the client-server communication that's from the VPN server,
which is the client, to the RADIUS server, which is the server, than does RADIUS.

RADIUS uses UDP, the user datagram protocol, which is a bit less reliable, and it also only encrypts
certain elements within its communications from RADIUS client to RADIUS server.

It'll encrypt, of course, things like passwords and stuff that is sent over the wire, but everything else it
pretty much leaves in clear text over these UDP datagrams.

TACACS+, on the other hand, uses TCP, which is a bit more reliable, and it encrypts the entire
communications channel between the RADIUS client and the RADIUS server.

Now remember, that's not between you and the VPN server, you're gonna be using some VPN protocol
there.
It's between the VPN server and the RADIUS server, which is the security policy server.

And finally, Diameter is the new and improved version of RADIUS, because of course the diameter of a
circle is twice the radius.

Now again, it's another open source protocol for anyone to integrate into their product and anyone to
use, and it improves upon RADIUS in that RADIUS was limited to certain methods of authenticating
users; particularly, it supported SLIP, the Serial Line Internet Protocol, and PPP, the Point-to-Point Protocol, for encapsulating connections, and those protocols only supported three
different authentication protocols, the password authentication protocol, the challenge handshake
authentication protocol, and the extensible authentication protocol, which we'll talk about in other
videos.

Well, Diameter doesn't have those problems, it can support any different encapsulation, and it can
support any type of authentication because it's extensible.

We have this base protocol, and then we have these sort of plugins which any vendor can write to
extend the protocol, let's say you come up with some new way to authenticate a user, some smart card,
or biometrics, or retina scan, or whatever it is that you wanna do, you can write a Diameter driver which
plugs into Diameter and then extends the protocol.

You can also write different extensions for different types of traffic, or different use cases.

Well, Diameter was great at that because it allowed for extensions to be made for newer technologies at
the time.

For example, when Diameter was being built we were doing a lot of cell phone technology, as we still
are.

Well, RADIUS wasn't particularly good with associating with one tower, and then disassociating, and then
associating to another provider's tower and borrowing it for a couple of minutes while you walk by that,
or drive past that, and then disassociating with that and then reassociating with something else.

RADIUS wasn't very good at that, but Diameter was built specifically to handle such scenarios.

So you can move between different service provider's endpoints and still maintain authenticated access
as you roam around the greater integrated network, and it also supports much better control for
realtime communications, like we have voice over IP, streaming video, all of those things that we do
these days and take for granted.

So that's Diameter, the new and improved RADIUS, and we're beginning to see more implementations
that take on these types of features.

== Identity ==

-- Authentication, authorization, and accounting--


There are three key concepts that are foundational to identity and access management.

Authentication, authorization, and accounting.

These concepts are the basis of everything we will build on, so this is a good one to spend some extra
time understanding.

Let's talk about each one of them separately.

Authentication is the process of recognizing a user's identity.

This is done by validating who they claim to be.

How do you validate who you are? Usually, it is some additional data that is specific to that person, and it
should be hard to reproduce or guess.

If you validate your credentials, such as a password correctly, you get access.

If not, access is denied.

Think of authentication as a two-part process.

A good way of thinking of this is something you are and something you have.

Check out the next video for a deep dive into authentication and the different ways to validate who you
are.

Just remember, others need to know who you are, and you have to prove it.

Authentication is proving who you are.

Authorization is determining what you are entitled to have access to.

Authorization is defined as giving someone permission to do or have something.

Another way to say that is giving the user access to a resource.

An important note is that authorization always takes place after authentication.

When you are on a site, say Globe Bank, and put in your username and password, which is
authentication, you get access to your transaction history, which is giving you authorization to that
information.

Authorization is the key element that organizations can use to control permissions to important
information.

When organizations build proper access controls through authorization, their users can access what they
need when they need it, but nothing more than they need.

When implemented properly, it is one of the strongest security controls a company can implement, with
the greatest impact.

Once authentication and authorization are in place, the way to ensure they are working properly is the
use of the third component of identity and access management, which is accounting.
Accounting is the process of measuring the resources the user consumes while they have access.

Some people call this monitoring when someone accesses a system.

Examples can be the time logged into a system, the data they reviewed or changed while in the system,
or even where they logged into the system from.

The reason the practice of accounting is important is to ensure that the access you have granted to users
is being used as intended, and to ensure that access to your systems by someone not granted access is
not occurring.

It should be done on a regular basis for your most critical systems, since these tend to be the most
targeted places for hackers to try to take advantage.

Now let's pull all three concepts together with an example.

When you go into your workplace, you may have a badge to prove who you are, which is authentication.

Once you swipe your badge to your office floor, if you have permission to access that particular area, the
locked door will open, which is authorization.

On a monthly basis, the security group will run a report to ensure that only authorized people have
accessed the building.

That is accounting.

The three key concepts for identity and access management, authentication, authorization, and
accounting help organizations maintain proper access to resources and provide a process for checking
that it is accurate.

== M04.niam ==

The fundamental function of Netguard Identity and Access Manager is to separate operations users from the knowledge of end system credentials. Users do not have knowledge of these end system credentials on network elements, servers, graphical user interfaces such as element management systems, network management systems, OSS systems, ancillary tools, or any web application.

Operations users simply need to know their personal Active Directory or corporate Windows credentials in order to first authenticate against Netguard Identity and Access Manager. Netguard Identity and Access Manager takes care of all authorization and will perform forensic session logging for each and every session initiated by an operations user.

If an operations user needs to have access to a graphical user interface, they'll actually need to log into the Netguard Identity and Access Manager client. Here you can see that this has already been achieved, and this user, OPS user, is presented with a list of all of the network devices and systems that they have access to. A simple right click will show the actual launch methodologies that are available against that network element, device or system.

Here I'm selecting a network element that not only has the capability of being reached through SSH, but
actually has a local craft interface that happens to be a web interface.

When a user selects this and clicks, you'll actually see that the Netguard Identity and Access Manager product will automatically log in on that user's behalf.

Netguard Identity and Access Manager launched Chrome in this case, went to the appropriate URL and injected the appropriate user ID and password on behalf of this user. This session is actually being video logged for forensic analysis.

OAuth

- [Instructor] SAML served the world well.

But as the demands of modern applications grew, we needed something different, so we created something called OAuth, followed by OIDC.

So let's first understand OAuth.

Now, what I'm about to show you here is an oversimplified picture.

Let's go with it for discussion sake.

We know SAML has been around for a while and it is well in use today in the modern authentication
protocols arena.

But as I mentioned, the word modern authentication doesn't have a strict definition.

So this might be the oldest sibling in the modern authentication arena.

SAML is great, but it was designed with web applications in mind.

Then came the smartphone revolution, SPAs, interesting application surfaces.

So we created OAuth, a delegation protocol.

I'll explain in a moment what that means, but this allowed new kinds of application scenarios, both
functionality and platforms.

But OAuth was a very, very loose standard.

So we needed something a little bit better defined, something everyone could agree with, and that was
OIDC, OpenID Connect.
Now, I say that this is an oversimplified picture because SAML today is still fully in use.

For some scenarios, it even makes sense.

Well, it is also fair to say that the world has resolutely agreed on OIDC.

There are so many SDKs.

There are so many accepted standards.

It is not surprising that Google Cloud Platform, Amazon, Okta, and yes, Azure AD, all of them support
OIDC.

Let's understand OAuth a little bit better.

As I mentioned, it is a standard for delegation.

What does that mean?

It means that I, the user, I am allowing an application A to do B, some kind of action or permission, on
my behalf.

Now, couple of very important points here.

At no point does A get my password or credential, MFA or whatever it might be, and B is a set of
permissions, something that I've consented to.

Okay, is this clear?

I think an example will make this clear.

So let's look at that next.

So here is a common example.

Let's say I'm surfing the internet and I land onto some absurd news site and I see an absurd article that
I'm compelled to leave my opinion on.

What am I doing here?

I'm requesting to use a service from some site, that service being I wish to leave a comment.

I want to engage on their discussion forum.

Okay, but some site needs to know who I am.

So some site says, "Dear user, who are you?"

Or rather, it says, "I wish to read your profile."

Okay, but the user doesn't share their profile directly.


The way this works is that you would log in using some other site, like a social media site.

So here on the social media site, I would authenticate to the social media site.

At this point, you may have seen this on the internet, you are presented with a consent dialog.

It will say something like, "Do you allow some site to access your username?"

Okay, you would say allow.

And frequently, you may see things like, "I want to see your username.

I want to see your friends list.

I want to see your home address.

I want to see your phone number.

I want to see your entire history.

Post on your behalf."

You see where I'm going with this.

There's a potential for abuse.

And users, they see this list of consents and they just say, "Allow."

So anyway, let's ignore that for a second.

So let's say that the social media site presents the user with a dialog box saying that some site wishes to
read your profile and you authenticate and allow.

And with that, you're able to use the service because now some site is able to read your social media
profile.

So this is how OAuth works in practice.

What you have done here is that some site never got your credentials, but some site was given a
permission to read your profile, do something on your behalf, read your profile on your behalf.
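
Under the hood, that consent flow usually begins with the application redirecting the browser to the social media site's authorization endpoint. The request below is a simplified, hypothetical OAuth 2.0 authorization code request; the host names, client_id and scope are made up for the example:

GET /authorize?response_type=code
      &client_id=some-site
      &redirect_uri=https://somesite.example/callback
      &scope=read_profile
      &state=xyz123
Host: socialmedia.example

After the user authenticates and consents, the browser is sent back to the redirect_uri with a short-lived authorization code, and the application exchanges that code at the token endpoint for an access token. At no point does the application see the user's password.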

OAuth is great, but it had a few problems.

First, there were no standards around the definition of what comprises a user identity.

Is it the subject claim?

Is it an ID claim?

Is it the user ID claim?

Is it the UPN or is it a pretty ID?

All these examples I gave you are examples in use that major web applications have used to define what
user identity means.
So it made working together across multiple platforms just difficult.

Another problem was oversharing and abuse.

And I must say that this problem is not specific to OAuth, but it was first seen in OAuth, that people
would just ask for all kinds of permissions and users would just allow without reading what it meant.

Then there were phishing attacks.

Again, this problem is not unique to OAuth, phishing is a common problem, but OAuth kind of really over
accentuated it.

So OAuth was great, but we needed something better, something more standardized, and that was OIDC.

Role-based access control

- [Instructor] Role-based access control systems simplify some of the work of managing authorizations.

Instead of trying to manage all of the permissions for an individual user, administrators create job-based
roles and then assign permissions to those roles.

They can then assign users to groups that correspond to those roles as well.

Now this is a little more work up front, but it makes life much easier down the road.

When a new user arrives, administrators don't need to figure out all the explicit permissions that user
requires.

The user just needs to be added to the appropriate groups and all of the permissions will follow.

Similarly, when a group of users needs a new permission, the administrator doesn't need to apply it to all
of the individual users.

The permission can be assigned to the group and all users with that role will receive the permission
automatically.

Let's look at an example.

Imagine Alice Jones comes to our company as a new supervisor in the accounting department.

As part of her job, she needs to handle all the work of an accounting clerk.

Now, administrators can go ahead and assign her to the accounting clerk group and she will
automatically receive all related permissions.

She'll also inherit changes as the permissions assigned to that group change.

Alice might also need advanced privileges reserved for accounting supervisors, so administrators can also
assign her to that role.

With these two role assignments in this example, Alice received six permissions, and her permissions will change with those roles as the business needs change.
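
A tiny Python sketch shows how roles collect permissions and how Alice inherits them through her group assignments; the role names and permissions are illustrative only:

# Map each role to the permissions it grants
role_permissions = {
    "accounting_clerk": {"read_invoices", "enter_invoices", "read_ledger"},
    "accounting_supervisor": {"approve_invoices", "close_period", "run_reports"},
}

# Map each user to the roles (groups) they belong to
user_roles = {"alice.jones": ["accounting_clerk", "accounting_supervisor"]}

def effective_permissions(user):
    # Union of the permissions granted by every role assigned to the user
    perms = set()
    for role in user_roles.get(user, []):
        perms |= role_permissions.get(role, set())
    return perms

print(sorted(effective_permissions("alice.jones")))  # six permissions in total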

== Belllabs-5g ==
This module will begin with 5G security requirements before moving on to discuss 5G security standards
and enablers, and finally the end to end 5G security architecture.

Here are our learning objectives for this module by the end of this module, you'll be able to talk about
the security challenges that 5G will bring and the corresponding measures that will need to be put in
place. You'll be able to give examples of potential attack vectors and the main 3GPP 5G security features.
This will help you develop an understanding of how to protect the cloud network, function virtualization
and network slicing in 5G.

Finally, you'll understand why a holistic and efficient end to end security solution must be implemented
and automated using artificial intelligence and machine learning.

We begin with an overview of the security requirements in the 5G environment. After that, we'll go into
specifics about 5G security standards and features that enable network security before we discuss the
security architecture as a whole.

Given what you already know about 5G and the changes to the architecture, the devices and the way
humans and machines will use networks, what is one potential new security challenge that you can think
of? How about two or three?

Give yourself a moment to come up with some potential security threats. When you're ready, move on
to see if you are on the right track.

With the explosion in the number and sophistication of use cases, we see a massive increase in security
risks and regulations. The new use cases are categorized into three areas, massive machine type
communication, ultra reliable low latency communication and enhanced mobile broadband. All may
potentially lead to network attacks being more severe. Take an example like remote surgery, which relies
on ultra reliable low latency communication. Having the connection lag could mean the difference
between life and death.

Such critical services absolutely must be protected.

With massive machine type communication, we'll soon see an enormous amount of interconnected
devices. How will we know they aren't riddled with exploitable weak points? Because insufficient
security shouldn't limit the potential of 5G, the Internet of Things must be protected. The software
defined technologies that come with 5G introduce new vulnerabilities. The seemingly unlimited
broadband user experience must not be compromised.

And let's not forget, 5G or not, humans are still the weakest link in the security chain. Human error and
social engineering exploits are typical risks that a 5G security management system will have to detect
and mitigate, among other security issues.
Beyond 5G networks, bringing disruptive technologies and new ecosystems, the threat landscape is also
expanding in the diversity of threat actors and sophistication of attacks. Whatever technology is available
to the network, it's also available to malicious actors. Note that the elements listed here are only
examples, not a comprehensive list.

This increase in number and variety means that enterprises are facing more and more challenges. First,
conventional security with fragmented, unconnected tools is becoming obsolete.

Instead, we need proactive cyber security measures with real time response, particularly for highly
sensitive applications such as E health and power grid control.

Furthermore, with 5G networks expected to become the backbone of many critical applications, the
integrity and availability of those networks will become more important than ever. 5G will therefore
need end to end monitoring, detection and response capabilities.

The regulatory and compliance landscape is also becoming more stringent and is faced with increased
liabilities in areas like privacy and data protection. This creates its own set of security challenges for both
enterprises and communications service providers. Consequently, security teams will have to address
new and more diverse skill sets to cover the wide security spectrum.

Sophisticated attacks coming from various threat actors often start with simple attack vectors, which are
then combined.

While it would be impossible to give an exhaustive list of attack vectors, what we can do is categorize the
majority of attacks into six vector groups.

These six vector groups are: attacks from compromised user equipment and IoT devices connected to the network, considering the massive amount of connected devices, potentially with insufficient security.

Attacks from physical access to the gNodeB, so someone having access to the network equipment at the base station.

Side channel attacks by third party virtualized network functions. These attacks often eavesdrop on
network activity, exploiting information about power consumption and timing. One of the possible
vulnerabilities that arise with the introduction of software defined technologies.

Attacks from the Internet.

Attacks from partner networks or the IP Exchange (IPX).


And insider threat or human error, always the weakest link in the security chain.

To ensure overall security, we need to protect each part of the network from attacks, separate the parts to avoid contagion to the larger network if an attack is successful, and lastly, implement a holistic approach. To do so, 5G security needs to address four requirements. Built in end to end security, ensuring that the elements in all networking domains have built in security and have implemented the functionality needed for end to end 5G security.

Well integrated defence in depth systems combining different technologies and measures to create
multiple layers of security across the entire network.

End to end security orchestration providing an overarching security lifecycle management solution that
orchestrates risk and threat prediction, but also includes seamless workflows for prevention, detection
and response measures.

Security automation creating an adaptive security architecture driven by intelligence and analytics is
essential to automating security.

These 5G security requirements are best served by a layered approach that leverages AI and machine
learning to constantly keep up with the dynamic threat landscape.

Defense in depth means the different and complementary security tools are deployed in layers. If an
attacker manages to pass one of these layers, there are still others in place. For example, a computer
would use a firewall as well as a malware scanner, an intrusion detection system, et cetera. It also means
that each layer is protected independently of the others.

There are two advanced mechanisms we can expect to see in 5G networks: massively adaptable AI and machine learning based security, and scalable controls at the edge, in particular the distributed edge clouds. Both are essential to build security mechanisms that are automated, adaptable and thereby efficient. Both are also part of future defence in depth security for 5G systems. Massive amounts of data will be collected from across the network and analyzed in real time, helping detect and deflect attacks rapidly.

We can divide the security approach into two main domains, protecting each part of the network
covered in Section 2 of this module, and AI and machine learning automation for real time adaption and
efficiency described in Section 3.

Based on this overview, which of these factors do you think make security so critical for 5G?

Select all that apply.


-Massive amounts of connected devices

-Massive broadband running applications, e.g. virtual reality

-Software-based technologies

-Highly sensitive applications, e.g. remote surgery

These examples will all be issues for 5G security, except massive broadband running applications such as
virtual reality.

Now that we've got a conceptual view of the security challenges in 5G, it's time to dig into some details.
In this section, we describe key elements in 5G security, including 5G standards.

Before we introduce the key enablers protecting the 5G network, let's bring back to mind the diagram of potential attack vectors.

In a 5G network, as in every mobile network, the edge is inherently exposed to attacks and must be
secured carefully.

Note that the edge referenced here is much larger than the edge cloud. It includes every single physical
and virtual entry point to the network.

The security approach aiming to protect every entry point is the edge-based perimeter. With 5G, we need to consider the dynamic nature of network functions in the cloud infrastructure as well as the mobility of user equipment, as both of these present unique challenges and demand massively scalable controls at the edge.

This section has 2 main parts where we'll explore security standards and enablers.

The first one focuses on 3GPP standards and the 5G network itself.

The second is security for cloud network function virtualization, software defined networking and
network slicing.

Following these two parts, we'll briefly talk about security assurance, which is quickly gaining in relevance.
First, a few words about the main security areas and standardization bodies affecting 5G networks.

3GPP standardizes some 5G security elements, such as the usage of authentication and authorization,
key agreements, cryptographic algorithms, and subscriber privacy. All of these elements contribute to
the basic security requirements of confidentiality and privacy, and they'll have to be kept up to date to
uphold these requirements, so the 3GPP standard will continue to evolve.

It's also interesting to note that most of these elements are supported by well known IETF Standards.
IETF, which stands for Internet Engineering Task Force, is the body that defines standard Internet
operating protocols heavily used in 5G for authentication and encryption.

Let's start the ball rolling with an overview of key 5G security enablers, including all the crucial security
elements as described in the 5G core module.

This is how the 5G architecture is depicted by 3GPP. First, take note of all the different core elements.
Note also the dashed line that distinguishes between a visited and a home public land mobile network.

Presented here is a roaming scenario with two independent 5G networks. In the lower left-hand corner, we see the user equipment representing, for example, a person with a mobile device in the roaming network.

Looking at the icons on top, we have network functions in the control planes with the lower part
showing the functions located in the user plane. You can refer to the core module of this series for a
refresher on this.

The idea in the service-based architecture of the 5G core is that each function can in principle call each
service of every other function.

However, 3GPP specifies exact procedures for this. Essentially, it's not so much defining that this function
may call that function, but rather how functions may call each other. This helps the specification remain
future proof in a dynamic cloud environment.

We now come to the heart of the matter. The specific network functions that play a major role in the
3GPP security architecture, especially in authentication procedures.
Looking at the Home Public Land Mobile network, we begin with the Unified Data management. This is
the function that gives access to subscription data and provides the data for user equipment
authentication. It's accessed by the authentication server function.

The exact role the authentication server function plays depends on the authentication method, which we'll talk more about later. You can also see the AUSF and UDM in the visited network. They're grayed out to signify they're not used by the user equipment presented here.

Moving over to the visited public land mobile network, the access and mobility management function handles part of the authentication procedure in the visited network. After a successful authentication, a security association is established between the user equipment and the AMF, which secures all signalling between the equipment and the core network.

Now, if we were in a non-roaming scenario, all functions would be in one network, but the authentication procedure would be the same. The roaming scenario, however, is interesting because it allows us to show you a new component introduced with 5G: the Security Edge Protection Proxy.

As its name indicates, the SEPP protects the edge of a public land mobile network against other similar networks. It also handles all control plane communication with other PLMNs.

Let's say that in an authentication procedure, the access and mobility management function in the visited network calls a service of the authentication server function in the home network. In this scenario, the AMF will send a request to the visited SEPP. The visited SEPP routes it to the home SEPP, and the home SEPP then delivers it to the AUSF. What's important is that the SEPPs apply protection mechanisms to the traffic before sending it over the IP exchange. While it would be easy to simply specify the use of IPsec tunnels or the like for all traffic between the SEPPs, this would not satisfy the operational requirements of the IP exchange. To work around this, 3GPP has specified very flexible security mechanisms that allow networks to selectively encrypt some of the information elements of a message.
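
To make "selectively encrypt some of the information elements" more concrete, here is a minimal sketch in Python. It is only an illustration of the idea, not the actual 3GPP N32 mechanism: the field names, the shared key and the helper function are all assumptions made for this example.

```python
# Illustrative sketch: encrypt only the sensitive fields of a signalling
# message before it crosses the IP exchange, leaving the rest readable for
# IPX intermediaries. Field names and keys are hypothetical.
import base64
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_message(message: dict, sensitive_fields: set, key: bytes) -> dict:
    """Encrypt only the listed fields; leave the rest in cleartext."""
    aead = AESGCM(key)
    protected = {}
    for field, value in message.items():
        if field in sensitive_fields:
            nonce = os.urandom(12)
            ciphertext = aead.encrypt(nonce, json.dumps(value).encode(), field.encode())
            protected[field] = {"enc": base64.b64encode(nonce + ciphertext).decode()}
        else:
            protected[field] = value  # stays readable for the IP exchange
    return protected

key = AESGCM.generate_key(bit_length=128)       # shared between the two SEPPs
authentication_request = {
    "requester_plmn": "visited-plmn-example",   # routing info, left in clear
    "target_nf": "AUSF",
    "suci": "example-concealed-identifier",     # sensitive, gets encrypted
}
print(protect_message(authentication_request, {"suci"}, key))
```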

There's also a new central function called the network repository function, or NRF. It's tasked with registering new network function instances and discovering available service instances, but it also has a major security aspect: as per the 3GPP specification, each service call to a network function is subject to authorization by the NRF. This mechanism makes the NRF the central point to store and enforce all policies concerning which requests are possible.

Finally, the gNode B is crucial for security as it handles access stratum security, meaning the protection
of user plane traffic, and radio control traffic over the radio interface.

Let's pause here and check our progress so far. Evaluate whether these statements are true or false and
click submit to confirm your answer.

That's correct. The network repository function authorizes all service calls that network functions make to other functions. 3GPP standards do not prevent one function from calling another per se, but they do determine the procedure for it. Secure edge protection proxies do not expose PLMNs to threats from other similar networks. Instead, they protect the networks from such threats.

Now let's examine in more detail the main new features of the 3GPP standardization for the 5G security architecture. For reference, the relevant standards document is 3GPP technical specification 33.501 release 15, titled Security architecture and procedures for the 5G system.

Let's begin with 5G's new access agnostic authentication framework with improved home network
control for roaming. Click next to proceed.

Compared to 4G, 5G offers more flexible and extensible authentication, specifying two authentication methods. One of them is called 5G AKA; AKA stands for authentication and key agreement, and 5G AKA is an enhancement of the Evolved Packet System AKA used in LTE.

The other makes use of the EAP framework, which stands for Extensible Authentication Protocol and is specified by the IETF. EAP allows multiple authentication methods, but only one is specified to be used in a visited network: EAP-AKA', an EAP method that makes use of similar mechanisms as 5G AKA. Note that in a private network using the 5G system, additional authentication methods can be used with the EAP framework.

Both 5G AKA and EAP-AKA' provide assurances to the home network that the user equipment is present in the visited network.

They're called access agnostic because, unlike in 4G LTE, both of these authentication methods can be
applied in 3GPP access as well as non 3GPP access.

Let's see how this works in simple terms. The user equipment connected to the visited network requests
to establish a signaling connection.

The security function in the AMF then sends an authentication request to the authentication server function in the home network, which forwards it to the UDM network function. The decision is then made whether to authenticate the user with 5G AKA or EAP-AKA' based on the subscription data.
Proceed to enhanced subscription privacy by clicking next.

The International Mobile Subscriber Identity is a number that uniquely identifies every user of a cellular network. It's stored in the SIM card as well as in the unified data repository. In 5G, the IMSI is replaced by a more generic subscription permanent identifier, the SUPI. If the IMSI or SUPI is sent in plain text over the air, a rogue base station located between the user's phone and the operator's real base station could fool the user equipment into sending its international mobile subscriber identity before authentication. This kind of attack is named IMSI catching and may be prevented in 5G with a feature called enhanced subscription privacy.

5G is more robust against IMSI catching attacks than 4G as it doesn't send any IMSI or SUPI over the air to the network. Instead it sends something called the subscription concealed identifier, an encrypted version of the permanent identifier. This feature is mandatory to support, but not mandatory to use. The concealed identity can be reverted to its original form by the subscription identifier de-concealing function, SIDF, which takes place in unified data management and is protected via access rights, so that only the home network's elements may present the request.

The SUPI is then sent back to the AMF.
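
A simplified ECIES-style sketch of the concealment idea is shown below: the UE encrypts the SUPI with a fresh ephemeral key against the home network's public key, so only the home network's SIDF, which holds the private key, can recover it. This is a teaching sketch, not the exact 3GPP ECIES profile; the curve choice, key derivation parameters and identifier format are illustrative assumptions.

```python
# Illustrative sketch of SUPI -> SUCI concealment using the home network's
# public key. Not the exact 3GPP profile; parameters are simplified.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared_secret: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"suci-demo").derive(shared_secret)

# Home network key pair; the public key is provisioned in the SIM/UE.
home_private = X25519PrivateKey.generate()
home_public = home_private.public_key()

def conceal(supi: str):
    """UE side: fresh ephemeral key per attempt, so SUCIs are unlinkable."""
    ephemeral = X25519PrivateKey.generate()
    key = derive_key(ephemeral.exchange(home_public))
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, supi.encode(), None)
    return ephemeral.public_key(), nonce, ciphertext

def deconceal(ephemeral_public, nonce, ciphertext) -> str:
    """SIDF side: only the holder of the home private key can do this."""
    key = derive_key(home_private.exchange(ephemeral_public))
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

suci = conceal("imsi-001010123456789")
print(deconceal(*suci))   # home network recovers the SUPI
```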

Next up, user plane integrity protection or integrity protection on the radio interface.

In 4G, the integrity of control plane traffic is protected while user plane traffic is only encrypted. So in 4G
it's impossible to detect if a packet was altered in the user plane traffic or if a fake packet was injected by
an attacker. Researchers have shown examples of attackers altering encrypted IP addresses and
messages to redirect HTTP requests to malicious websites.

To prevent this, 3GPP has specified user plane integrity protection as mandatory to support. While it makes 5G more robust than 4G for integrity protection, it is optional to use, and there may also be technical limitations whereby the feature only works with certain spectrum bands.
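
The sketch below illustrates why encryption alone is not enough. AES-CTR stands in for an encrypt-only bearer and AES-GCM stands in for ciphering plus integrity protection; these are stand-ins for the purpose of the demonstration, not the actual 5G radio algorithms.

```python
# With encryption only, a flipped ciphertext bit is silently accepted as
# (garbled) plaintext; with an integrity-protected mode, tampering is rejected.
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key, packet = os.urandom(16), b"GET http://example.test/ HTTP/1.1"

# Encryption only: attacker flips one bit, receiver decrypts without noticing.
nonce = os.urandom(16)
ct = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(packet)
tampered = bytes([ct[0] ^ 0x01]) + ct[1:]
plain = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor().update(tampered)
print("encryption only, tampering went unnoticed:", plain != packet)

# Encryption plus integrity: the same bit flip is caught.
aead, nonce = AESGCM(key), os.urandom(12)
ct = aead.encrypt(nonce, packet, None)
try:
    aead.decrypt(nonce, bytes([ct[0] ^ 0x01]) + ct[1:], None)
except InvalidTag:
    print("integrity protection detected the modification")
```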

Next, a glance at secondary authentication.

Let's see an example of EAP-based secondary authentication. After the user equipment has been authenticated to the 3GPP network, it may also be necessary to have an additional, secondary authentication to a data network to which the user equipment connects via the 3GPP network. In 5G, the 3GPP network supports the transport of EAP messages for this secondary authentication. Data network operators therefore have the full range of EAP methods available for authentication between user equipment and data networks.

Now let's talk about security for service based interfaces.

Service-based interfaces should be protected on the transport layer by Transport Layer Security. TLS is mandatory to support and optional to use. Click the button to read more about these standards.

Authentication in the TLS handshake is based on certificates. In addition, 3GPP specifications make each service call subject to authorization by the network repository function. More specifically, a consumer network function that wants to call a service of another, producer network function must obtain authorization for this call from the NRF. The NRF checks whether to authorize the request based on configured policies. If the request is cleared, the NRF passes an authorization token to the consumer.

When the consumer network function calls the service, the producer network function answers the request only if it includes a valid token. The NRF hereby acts as an OAuth (open authorization) authorization server. This authorization mechanism is mandatory to implement, but not mandatory to use. For example, an operator may decide to apply it only to external service requests coming from other public land mobile networks.
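
Below is a hedged sketch of this token-based pattern: a consumer NF asks a toy "NRF" for an access token, and the producer NF only serves requests carrying a token with a valid signature, a matching audience and scope, and an unexpired lifetime. The token format, policy table and scope names are illustrative, not the 3GPP encoding.

```python
# Toy OAuth-style authorization: the NRF issues signed tokens per policy,
# the producer NF validates them before answering a service request.
import base64
import hashlib
import hmac
import json
import time

NRF_SIGNING_KEY = b"shared-demo-secret"
POLICY = {("AMF", "AUSF"): ["nausf-auth"]}   # which consumer may call what

def issue_token(consumer: str, producer: str, scope: str, ttl: int = 300) -> str:
    if scope not in POLICY.get((consumer, producer), []):
        raise PermissionError("NRF policy denies this service request")
    claims = {"sub": consumer, "aud": producer, "scope": scope,
              "exp": time.time() + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(NRF_SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def producer_accepts(token: str, producer: str, scope: str) -> bool:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(NRF_SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                      # token was not issued by the NRF
    claims = json.loads(base64.urlsafe_b64decode(body))
    return (claims["aud"] == producer and claims["scope"] == scope
            and claims["exp"] > time.time())

token = issue_token("AMF", "AUSF", "nausf-auth")
print(producer_accepts(token, "AUSF", "nausf-auth"))   # True
```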

Lastly, we look at security connecting separate public land mobile networks.

With LTE, it has often been the case that the IP exchange network interconnecting wireless networks is not suitably secured, and several PLMNs have leaked subscriber information on fraudulent requests from the IP exchange.

To combat this, 3GPP specifies significant enhancements for interconnection security between 5G public land mobile networks. These include the new network element, the SEPP, and a new PLMN interconnection interface called the N32 interface.

Let's look at two separate public land mobile networks here, A and B. One is a home network and the other is a roaming one. We can see that all signalling traffic between the networks transits the N32 interface and passes through SEPPs, which act as non-transparent proxy nodes. This enables the network to effectively filter the traffic coming in from the interconnect and immediately discard any malformed N32 signalling messages, preventing spoofing attempts.

The home network SEPP performs mutual authentication and negotiation of cipher suites with the SEPP in the roaming network before conveying network functions' service-related signalling over the N32 interface.
HTTPS and transport layer security are used as the transport security protocols to encrypt the connection and authenticate the endpoints. The result is secure encryption, integrity and authentication of signalling between 5G networks; more flexible security when compared to 2G, 3G and 4G, even with a firewall; and the possibility for business models between providers that still allow each operator control over what is modified.

Using what we've covered so far, can you take these three elements and place them correctly?

That's right.

After this overview of the main new security features in the 5G network, in Part 2 we shed light on the security elements for cloud network function virtualization, software defined networking and network slicing.

Let's start with a quick word about the bodies that create standards for these security elements. The European Telecommunications Standards Institute, or ETSI, provides comprehensive standardization for cyber security, security algorithms and secure elements in IoT and M2M, among other areas. Its security groups work closely with stakeholders to develop standards that increase privacy and security, and the ETSI NFV group also provides guidelines for network function virtualization security, which is particularly relevant to 5G. Another body is the International Telecommunication Union, ITU; its telecommunication standardization sector security group has security recommendations for software defined networking and cloud computing.

To secure the cloud in 5G networks, we use the defence in depth model. Rather than trying to defeat an attacker with a single strong defensive line, defence in depth is built in and layered. If one of the perimeters is breached, there are still multiple defence layers to block the attacker. This strategy was pioneered by the National Security Agency and is a perfect fit for cloud environments and 5G networks.

If an attack makes it through the defence mechanisms for the network, it will then have to crack the
security at the platform level and then the application layer before it can even attempt to get at the data
layer.

As we'll see later, these defence layers are continually optimized using information gathered from the
network. But for now, let's look at a few more details on how this applies to network function
virtualization and software defined networking.

You'll remember from the cloud module that network function virtualization along with management
and network orchestration, is a crucial element in the 5G cloud. It provides the flexibility and elasticity
needed for a variety of use cases. Naturally, the sharing of infrastructure by diverse virtualized network
functions introduces a number of potential threats.

First are the actual VNFs, as they are software components provided by a vendor independent of the infrastructure provider. They can contain software vulnerabilities or even be malware themselves. Attackers could also take advantage of vulnerabilities present in hypervisors, for example, to undermine the confidentiality, integrity or availability of VNF resources.

Attackers may even try to eavesdrop on or modify traffic between the NFV infrastructure and the NFV management and orchestration, as well as traffic within the NFV MANO.

Attackers may attempt to disrupt the life cycle management of network services or individual VNF's by
exploiting the orchestrator or VNF manager.

And as a final example, attacking the virtualized infrastructure manager could, for instance allow denial
of service or data theft, bypassing hypervisor isolation.

How are we going to cover these possible attack vectors?

Our first line of defence is VNF isolation, traffic separation, and network zoning. These rely on robust implementations of the hypervisor and the overall cloud platform software. They can be put in place in a fairly straightforward way using virtual firewalls, switches, local area networks, and wide area private networks.

In addition to the separation, VNFs may also apply cryptographic protection to traffic exchanged with any communication peer outside the telco cloud.

Lastly, we need to separate the operational and management traffic from all other traffic, using protocols that secure access to management functions. This should be considered for securing the MANO stack in addition to the element managers.

Moving on to software defined networks, they are in general exposed to three main types of attacks: denial of service, man in the middle, and network visibility poisoning.

Click on the buttons to see a definition of these attacks, but which parts of the software defined
networks in particular are vulnerable to these attacks?

First, software defined networking switches and the traffic flowing through them are attractive targets
for an attacker.
Communications between the control plane and data plane also make interesting targets, with
objectives like interrupting or eavesdropping on traffic, or stealthily modifying traffic between 2 hosts.

Meanwhile, controllers bear all the intelligence of the network, making them the most lucrative targets for attack. Two topology poisoning attack types affect the controller.

The first, host location hijacking, exploits the host tracking service that updates the profile of each host in the network. The second, link fabrication, exploits the controller's capabilities to create new links, opening the door to denial of service or man in the middle attacks, among others.

Now that you've seen vulnerabilities that expose software defined networks to attacks, you're probably
wondering how do we protect them?

To secure SDNs, robust authentication and authorization protocols must be implemented for the controller. This means that access to the controller must be managed by a firewall. We also need resilient overload control for both the SDN switches and the controller to prevent denial of service.

Cryptographic protection, such as transport layer security, should be implemented to help counter man in the middle attacks.

The firewall around the controller will perform two major roles. It provides defence against external threats by refusing unauthorized connections to the controller, and it also protects the network infrastructure from within.

Let's now shift the focus to network slicing management. While slicing security has not yet been standardized, 3GPP is actively adding security elements in the area of network slicing.

As you know by now, use cases with different performance requirements are efficiently supported by
different slices. Here you can see end to end slices supporting use cases like autonomous driving, E
health and the Internet of Things.

The crucial aspect for security and network slicing is isolation. There are two angles to network slice
isolation. The first is resource isolation, which means that resources dedicated to one slice can't be
consumed by another slice, and also that data and traffic cannot be intercepted or faked by entities of
another slice. One key objective is to confine any effects of a potential cyber-attack to a single network
slice, leaving other slices unharmed.
The second angle, security isolation, is less obvious. It refers to the fact that separate slices have different security requirements and therefore may need different protection methods.

In particular, perfect isolation is required in a multi-tenant setup where tenants might be competing organizations, such as different manufacturers each running their own industrial automation slice.

As security requirements vary for different slices, 5G security must be flexible. Instead of a one-size-fits-all approach, the security setup can be tailored per slice to ensure optimal support for each application.

For instance, we can be flexible about the mechanisms used to identify and authenticate mobile devices and subscriptions, or to determine the way user traffic is protected. For example, some applications may rely on security mechanisms offered by the network. These applications may require not only encryption, as in LTE, but also user plane integrity protection. Other applications, however, might use end to end security on the application layer.

They may opt out of network-terminated user plane security because, for them, it doesn't provide any needed additional security, and opting out reduces the energy consumption of mobile devices.

Now that we have a firm grasp of the security standards and enablers for 5G, cloud, NFV and SDN, and network slicing, let's wrap up this section with a few words about security assurance.

Security assurance is a new and emerging topic with 5G development, and it pertains to the certification
of critical security components in 5G networks.

Security assurance builds trust. It's a measure of confidence that the security features, practices,
procedures and architecture of an information system accurately implement and enforce the security
policy. Both 3GPP and the GSM Association have standards for security assurance.

The 3GPP security assurance methodology, or SECAM, aims to provide common and testable baseline security properties for the different network product classes. The GSMA Network Equipment Security Assurance Scheme, or NESAS, was officially approved by the GSM Association at the start of October 2019. Nokia has been a central player in the development of NESAS.
Security assurance is a major part of both prevention and compliance. Within Nokia, we are preparing for the NESAS process audit as well as for the network equipment evaluations to comply with 3GPP-approved security assurance specifications, known as SCAS.

Check your knowledge, select the correct answer for each statement.

The final section takes a look at the end-to-end architecture of 5G security, creating a holistic approach.

The expanding threat landscape and the increasingly sophisticated attacks on mobile networks make truly end to end and well-integrated network security a necessity. Advanced persistent threats, for example, often use several attack vectors to infiltrate their target. They then keep quiet in the system for a while, either to steal information or to wait until more global attacks are initiated; hence the name persistent. Obviously, it's no longer enough to protect each part of the network independently.

Instead, a defence in depth solution that spans the entire network is essential. Automation built on artificial intelligence and machine learning brings real time adaptation and efficiency, while edge-based perimeter control ensures that each part of the network is protected.

Massive amounts of data will be collected from across the network and analysed in real time in the cloud so that threats can be quickly detected and preventive actions triggered to ward off even the most complex attacks. As end-to-end 5G network security learns from common attack patterns, it will continuously optimize its policies to monitor and prevent attacks, ensuring the trust of the applications it will support.

5G is an entirely new type of network, with many more potential entry points for cyber criminals and
because of its dynamic nature, it demands a new approach to security altogether. That approach is called
SOAR, which stands for security orchestration, automation and response.

The SOAR cycle is based on 4 phases, detect, respond, predict and prevent.

To examine this cycle in more detail, we'll begin with prevent. It entails gathering data from the network,
analyzing it to identify trends and accumulate experience, and then using that information to understand
potential risks and nip security issues in the bud.

The next step in the cycle, detect, involves collecting data from across the network and immediately applying automatic correlations. This is where machine learning is crucial for early detection. By treating the entire network as a sensor, SOAR ingests data from existing systems, feeding machine learning analytics and threat intelligence to detect suspicious anomalies at scale and troubleshoot risks rapidly.

The system then responds following an automated workflow based on playbooks, basically the same procedures that a security analyst would perform manually today. A playbook tells us: look, we've detected an anomaly; here is the recipe to fix it.
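
To make the playbook idea concrete, here is a minimal sketch: anomaly types map to an ordered recipe of response steps that a runner executes automatically, falling back to a human when no recipe exists. The anomaly names and step names are invented for illustration; a real SOAR platform would call its own orchestration APIs at each step.

```python
# A toy playbook store and runner. Step names are placeholders.
PLAYBOOKS = {
    "impossible_travel_login": [
        "suspend_account",
        "revoke_active_sessions",
        "notify_security_analyst",
    ],
    "malware_beacon_detected": [
        "isolate_host_at_switch_port",
        "capture_memory_snapshot",
        "open_incident_ticket",
    ],
}

def respond(anomaly: str) -> list:
    steps = PLAYBOOKS.get(anomaly)
    if steps is None:
        return ["escalate_to_human_analyst"]   # no recipe yet: a human decides
    executed = []
    for step in steps:
        # In a real platform each step would call an orchestration API;
        # here we only record that the step ran, in order.
        executed.append(step)
    return executed

print(respond("malware_beacon_detected"))
```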

These playbooks will be something customers can leverage from provider expertise and also create or modify themselves. Thanks to machine learning and analytics, SOAR will continuously shrink the time between detecting something and fixing it.

The information gained from any incident goes back in to constantly enrich the overall security
knowledge so we can predict further attacks automatically.

Which of the following statements best describes the SOAR solution?

That's correct. SOAR does not need human interaction. It is the security system that automatically
handles security threats across the network.

Let's recap the most important points covered in this module. 5G will drive more expansive use cases, which in turn leads to a greater risk of more sophisticated attacks. These attacks can come from user devices, from direct access to the gNode B, from malicious virtual functions, from networks like the Internet, or from threat actors working on the inside of the network. 5G security needs a layered and holistic approach using both edge-based perimeter security and automated AI and machine learning security orchestration for efficiency.

Several standards bodies have protocols and assurance approaches that address many 5G security concerns. While good practice implementation is closing the gap for edge-based perimeter security, an adaptive security architecture that leverages the attributes of SOAR will help prevent, detect, respond to, and predict attacks in the end-to-end 5G network.

== M06.Asd ==

ASD and its top controls

- [Narrator] The Standards Associations of Australia and New Zealand have a strong history of
contributing to international standards.

With standards such as ISO 31000 risk management having been born from the "AS/NZS 4360 Risk Management" standard.

The Australian Signals Directorate, the Australian counterpart to the National Security Agency, has
published its own standards for security.

The primary reference is the Information Security Manual which lists over 650 security controls.
While the ISM is comprehensive it's much too detailed to be a useful security reference outside of the
military.

In an attempt to make it more useful, ASD decided in 2010 to publish the ASD Top 35, a manageable set of security controls which was not quite as extensive as ISO 27000 but was less cumbersome than the ISM.

These controls are rated at one of five levels of importance; essential, excellent, very good, good and
limited.

The controls are grouped into those that prevent an incident, those that limit the extent of an incident,
those that detect and respond to incidents, those that recover from incidents and the final control,
personnel management, is to prevent malicious insiders.

Subsequent to publishing this ASD has promoted various subsets of these controls.

The first was the ASD Top Four, which ASD claims would mitigate over 85% of all breaches.

It was at this point that there was a level of international interest in the ASD approach.

The Top Four are application whitelisting, patching of applications, patching of operating systems, and limiting administrative privileges.

ASD also promotes the Essential Eight controls which extend the Top Four with macro settings,
application hardening, multifactor authentication and backups.

Unfortunately the ASD controls have two key weaknesses.

They're based on controlling what legitimate users can do rather than how attackers work and they're
not necessarily easy or even practical to apply in a business environment.

Application whitelisting's a powerful control especially on workstations but it's proved to be business
inhibiting and is rarely used.

Even common tasks such as joining a web conference can be made difficult, if not impossible, when
application whitelisting is used.

Furthermore, an attack will often inject malware into a process.

It doesn't need to run as an application, so it easily avoids this control. Restricting administrative privileges is a good control, but it fails to address the issue of privilege escalation, a widespread attack technique.

Why do I need incident management?

- [Instructor] So why do you need incident management anyway?

Well, let's talk about what it is.

An incident is an unplanned interruption or reduction in the quality of a service.

It's an outage or a failure, an unusual operational event that requires organized, tactical action to resolve.
A bug isn't an incident, though a bug may contribute to an incident.

In modern complex systems something is broken or not working as designed pretty much all the time
and our systems have a lot of resilience built in to handle that.

These failures become an incident when human response is needed.

An incident management process is designed to let an organization restore service quickly to affected
customers when a service is down or degraded.

Incident management works differently from normal support cases and development processes
designed to fix product defects because incidents are more time sensitive.

A full incident management process consists of several stages.

Incident response, the subject of this course, covers incident detection, escalation, communication,
diagnosis, and repair.

Basically how to get good at coordinated remediation.

Incident retrospectives, a subject for another time, includes incident analysis, remediation, and
prevention.

If this isn't part of your incident management program you may be firefighting well, but you're going to
keep on firefighting unless you learn to build a more resilient system.

But, incident management process, none of those words sound like fun, am I right?

Why do you need this?

If something breaks just fix it.

Is it really more complicated than that?

I assume I don't need to tell you that the world runs on software.

Most software is in the form of services nowadays, and users don't like it much when those services are down.

You know what it's like to get agitated when Google, Facebook, or Netflix is down.

Companies lose customers, revenue, and goodwill when it happens.

And systems are more complicated today than they've ever been.

Just reboot it isn't a fix most of the time anymore.

Services have many components and you don't have the luxury of taking them all down to fix the
problem.

Usually you're doing the equivalent of trying to fix a broken airplane while you're flying it.

And the organizations that build and support these services are just as complicated.
Without a well-designed and practiced playbook you can have no one, or not the right people, working
to solve the problem, people working to solve the wrong problem, people stepping all over each other
trying to solve the problem making things worse, or just too many people trying to solve the problem, or,
customers or important stakeholders left in the dark about what's going on.

Even smart, dedicated engineers can be reduced to the techy equivalent of a bunch of kids playing
soccer by all running after the ball.

I've worked in places dealing with the aftermath of catastrophic incidents with no clear plan in place to
deal with them.

There was the month-long accounting system outage that prevented our company from taking in any
revenue, then there was the monitoring product that was down for days for 1000 users.

All applications and systems fail.

They fail when you make changes, they fail when left alone.

There's nothing you can do to completely prevent bugs and failures, but you can have a solid playbook
and have a team that has practiced how to respond skillfully.

And as a result you can have users who are happy with your application and have confidence in your
ability to handle problems when they arise.

Next, let's talk about the techniques that professional emergency responders have developed to deal
with this problem: the Incident Command System.

Managing an SOC

- [Narrator] The security operations center is the epicenter of an organization's cybersecurity program.

The SOC plays a crucial role in collecting and analyzing security information, operating security
infrastructure and coordinating incident response efforts.

Many organizations run their SOCs on a 24/7 basis although smaller organizations may provide after
hour service through an on-call staff member.

Let's talk about a few roles of the SOC.

First, the SOC acts as a central point for the monitoring of security controls.

These include a wide variety of systems, such as firewalls, intrusion detection systems, intrusion
prevention systems, honeypots, vulnerability assessment tools, network security groups, and many other
controls.

This is often done through the centralized console of a security information and event management, or
SIEM tool.

The SIEM collects log entries from a wide variety of security-related systems and stores them centrally.

It also provides analysts with the ability to correlate that information as information spread across
multiple systems may combine to provide valuable threat intelligence.
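
The sketch below gives a feel for that correlation: events that look unremarkable on their own are combined into one higher-confidence alert when they involve the same user across several systems within a short window. The log fields, sources and thresholds are invented for illustration, not the schema of any particular SIEM.

```python
# Toy correlation rule: alert when one user triggers events in several
# different systems within a 30-minute window.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    {"src": "vpn",      "user": "alice", "type": "login_foreign_country",
     "time": datetime(2024, 3, 1, 2, 14)},
    {"src": "endpoint", "user": "alice", "type": "new_admin_tool_executed",
     "time": datetime(2024, 3, 1, 2, 19)},
    {"src": "firewall", "user": "alice", "type": "large_outbound_transfer",
     "time": datetime(2024, 3, 1, 2, 31)},
]

def correlate(events, window=timedelta(minutes=30), min_sources=3):
    """Raise an alert when one user appears in several systems close together."""
    by_user = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        by_user[event["user"]].append(event)
    alerts = []
    for user, user_events in by_user.items():
        span = user_events[-1]["time"] - user_events[0]["time"]
        sources = {e["src"] for e in user_events}
        if span <= window and len(sources) >= min_sources:
            alerts.append({"user": user, "sources": sorted(sources)})
    return alerts

print(correlate(events))   # one correlated alert for alice
```
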
The SOC also serves as a centralized point of coordination for all cybersecurity activities.

When anyone in the organization suspects a potential security incident, they should use the SOC as their
first point of contact.

The SOC also plays a key role in the organization's cybersecurity professional development program.

SOC analyst roles are often entry level cybersecurity positions, and they can be used as the first step in a
cybersecurity career, feeding more specialized positions throughout the organization as individuals
develop into new roles.

Penetration testing stages

- [Instructor] Let's get into the stages of penetration testing.

Penetration testing is, for the most part, a step-by-step process.

While the systems, tools, and technologies may change from pen test to pen test, the process is typically
broken down into four to six steps, depending on who you ask.

But let me first address the elephant in the room.

You may be asking yourself, what's the difference between penetration testing and ethical hacking?

Well let's get into it.

Ethical hacking requires a hacker to employ a continuous cycle of assessment to determine an organization's security posture.

They use the same tools, techniques, and approaches as a malicious hacker, but with the goal of
improving organizational security.

Ethical hacking is not usually bounded by scope or rules of engagement, in much of the same way that
malicious actors don't have a scope or rules themselves, they also don't have many of the same time
constraints.

Penetration testing, on the other hand, is a directed type of hacking where the tester has a defined
scope and a goal of assessing organizational vulnerabilities in a specific area.

A pen tester is usually an outside individual, or a team that's been hired to prepare a comprehensive report outlining the organization's vulnerabilities and mitigation strategies.

Their process is very much the same, but you can think of pen testing as a more specific subcategory of
ethical hacking.

The steps to a penetration test are planning, discovery, attack, and reporting.

You may see expanded definitions, but in general, the goals and processes are the same.

The first step to pen testing is called planning.

In the planning phase, the rules of engagement are identified, the goals are set, and management
approval is finalized.
You may be asked what the first step of any penetration test is, and the answer is always written
permission.

It's important that before you start the penetration testing process, that you have written permission.

Verbal permission will not keep you out of jail, but written permission, often called a get out of jail free
card, establishes that you have the legal right to be doing what you're doing.

The next step is called discovery.

Discovery typically involves two parts, scanning and enumeration, and vulnerability analysis.
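
As a minimal illustration of the scanning half of discovery, here is a simple TCP connect scan using only the Python standard library. The host and port list are placeholders, and, as stressed above, such a scan must only ever be run against systems you have written permission to test.

```python
# Minimal TCP connect scan of a few common ports on an in-scope host.
import socket

def tcp_connect_scan(host: str, ports, timeout: float = 0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means connected
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    in_scope_host = "127.0.0.1"                      # placeholder target
    print(tcp_connect_scan(in_scope_host, [22, 80, 443, 3389, 8080]))
```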

In the ethical hacking stages, we typically look at this as gaining access, but that gaining access process
actually requires us to analyze vulnerabilities and attempt to exploit them.

That exploitation is called attacking in the penetration testing process.

Executing an attack requires a pen tester to attempt to exploit vulnerabilities of interest.

The goal here is to demonstrate to an organization what an attacker would be able to accomplish given the current state of the company's security.

You have to understand your why, and also the business impact.

The attack phase includes gaining access, escalating privileges, system browsing, and installing additional
tools.

Again, the goal here is to determine the extent of the vulnerability, and to be able to accurately assess
the risk to the organization.

In the next couple of videos, we'll take a look at some of the tools and techniques for penetration
testing, but the process described here is always the same.

You should definitely understand the business of the organization so that you can focus on exploiting the things that best demonstrate the state of the organization's security.

== M06.Saas ==

Security as a Service

- [Instructor] When cloud first emerged, a major concern was how it could be properly secured.

A decade later it was becoming clear that cloud service providers had stepped up to the challenge and
were delivering security more effectively than was possible with on-premise solutions.

This laid the foundation for security to be delivered as a service, not only on the cloud for cloud
deployments, but also to secure hybrid and on-premise deployments.

There are many security capabilities which can be delivered through managed services.
A key one is Identity as a Service.

Other security services that can be delivered from the cloud include access management, web and email
filtering, and intrusion detection to mention just a few.

To be considered as Security as a Service, rather than just an outsourced security service, these must still
meet the essential characteristics of cloud.

There are five key benefits that come with Security as a Service.

The normal benefits of cloud deployment apply, including reduced capital outlay, elasticity, pay as you
grow, redundancy, and so on.

Cloud providers have highly-skilled security teams with extensive domain knowledge.

Whereas customers can often struggle to maintain a fraction of the capability.

Cloud providers protect multiple clients and have access to a wide range of threat intelligence that's not
generally available.

This ensures more effective security.

Cloud security services can more easily adjust to an evolving set of business requirements than can fixed
on-premise security infrastructure.

Cloud providers will intercept and defeat many attacks as they arrive at the cloud boundary before they
even reach the customer deployments.

And the ability of cloud providers to detect and respond to attacks can be a significant deterrent.

Service Level Agreements (SLA)

- [Instructor] The service-level agreement is the primary way you will manage the work done by your
MSSP.

An SLA is usually written in the form of a contract.

Because it's so important, let's examine the essential elements of an effective SLA.

First, it will list the type of service to be performed.

It will also describe the various performance levels, usually in terms of reliability and responsiveness.

The SLA will also state how the performance levels are supervised, monitored, and reported.

It describes how to report incidents that you have with the service, the incident resolution time frames,
and the consequences for the MSSP of not meeting its commitments, which may include your right to
terminate the contract, or receive a 10% service credit on your next invoice.

The default SLAs are designed to deliver acceptable service to you while limiting MSSP liabilities.

Sometimes, the SLA can be vague.


For example, an outsourced SOC may commit to delivering a cyber attack alert to you by email within 10 minutes, but does that include the time they need to verify it?

Or does that 10-minute timer start only after they've spent 15 minutes investigating the alert and
deciding that it's real?

What if they assume it's a false alarm, so you never see it?

But it was actually real.
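
Here is a small worked example of why that wording matters: the same incident either meets or misses a "10 minutes to alert" commitment depending on whether the clock starts at detection or only after triage. The timestamps are invented purely for illustration.

```python
# Same incident, two interpretations of when the SLA timer starts.
from datetime import datetime

detected   = datetime(2024, 5, 1, 10, 0)   # sensor fires
triaged    = datetime(2024, 5, 1, 10, 15)  # analyst decides it's real
alert_sent = datetime(2024, 5, 1, 10, 22)  # email reaches the customer

def minutes_between(start, end):
    return (end - start).total_seconds() / 60

print("from detection:", minutes_between(detected, alert_sent), "min")  # 22.0 -> SLA missed
print("from triage:   ", minutes_between(triaged, alert_sent), "min")   # 7.0  -> SLA met
```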

Now, some very large MSSPs will offer their service on a take it or leave it basis, unless you are also a
very large organization and have a lot of money to spend.

Here's some other advice I can share with you about SLAs.

Be sure to conduct regular reviews of your metrics with the MSSP, and if you can, install an independent
monitoring system.

Finally, be prepared to adjust the SLA over time as your business grows and changes.

== M07.2. learning GDPR ==

When I started my privacy career, there weren't many privacy laws in place, but over the last 20 years
we've seen a rise in the amount of data collected and processed. As a result, Europe has taken the lead
in protecting personal data with the adoption of the General Data Protection Regulation, or GDPR.

My name is Kalinda Raina, and as a privacy practitioner, I'm excited to provide you with an overview of the GDPR and hopefully a better understanding of its significance. In this course, you'll learn what the GDPR is,

how it came about,

the basic requirements of the law,

and why this law matters to you and your company. Are you ready to dig in and learn more about the GDPR and its impact on you? Let's get started.

GDPR stands for the General Data Protection Regulation, and it is the most comprehensive data privacy law ever passed. It was designed to strengthen and unify data protection for all individuals within the European Union. Anyone involved in processing personal data about individuals in the EU must comply, whether they're located in the EU, the US, or anywhere else in the world. The fines for companies that fail to comply can range from 2 to 4% of global annual revenue. Just think about that.

For many companies, these fines could reach into the billions.

This broad scope and the potential for immense fines are why the GDPR, unlike privacy laws of the past, is getting so much attention from global corporations.

The GDPR is not an entirely new law. It was preceded by the EU Data Protection Directive, which was
passed in 1995 and remained in effect from 1998 until May of 2018.
After four years of debate, the EU passed the GDPR in 2016, which expanded the number of data protection obligations required of companies and strengthened rights, many of which already existed under the Data Protection Directive. With GDPR, protecting and honoring data subject rights is more than just good practice; data protection is now directly tied to a company's bottom line.

As a result, data privacy is becoming an area of increased interest and scrutiny for users, the C-Suite and
the boardroom.

GDPR and data protection are something every employee needs to have a working knowledge of.

The GDPR includes nearly 100 different provisions directing companies on how to collect, manage, and process personal data while outlining key data rights for EU citizens. In the world of big data and interconnected technology, the GDPR provides a way to think about data privacy which many other regions, including Asia and Latin America, are using as a basis for their own data privacy laws.

It's therefore important to have a working knowledge of the GDPR, as many of its core principles will
likely govern the way even countries outside of the EU think about data protection.

It will also influence customer expectations for data protection globally.

So you might be asking, what does the GDPR say? Well, in essence it breaks down to four key concepts. Is your company lawfully processing personal data? Are you honoring your users' data subject rights? Are you meeting your company's obligations as a data controller or data processor?

And are you designing privacy into your products? In addition to these four tenets, I want to highlight a few significant changes GDPR will bring about. First, many companies will be required to appoint a data protection officer, or DPO. This is an individual who has independent authority to oversee a company's compliance with GDPR. Additionally, the GDPR provides protections for children under 16 by requiring parental consent before a child's personal data can be collected by a company.

If your organization collects or processes the personal data of EU children under 16, this is an issue you will need to look into. The age for parental consent does vary across EU countries and can be set as low as 13.

Another substantive impact is the amount of time that companies have to respond to a data breach.

The GDPR mandates reporting data breaches to an EU regulator within 72 hours of learning of an incident. That's just three days. This means companies need clear escalation paths to their security and legal departments when a breach occurs.
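
A tiny sketch of that 72-hour clock is shown below: given the moment the organization becomes aware of a breach, it computes the latest time a notification can reach the regulator and checks whether a planned report time meets it. The timestamps are illustrative only.

```python
# Compute the GDPR Article 33 notification deadline from the moment of awareness.
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=72)

def notification_deadline(awareness_time: datetime) -> datetime:
    """Notify the supervisory authority within 72 hours of becoming aware."""
    return awareness_time + REPORTING_WINDOW

became_aware = datetime(2024, 6, 3, 9, 30)
deadline = notification_deadline(became_aware)
report_time = datetime(2024, 6, 5, 14, 0)

print("must notify regulator by:", deadline)
print("planned report is in time:", report_time <= deadline)
```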

So as you can see, there's a lot to consider when dealing with the GDPR.

Privacy by design is a key concept of the GDPR. While the idea has been around for some time, the GDPR makes it a legal requirement. Privacy by design means thinking about data privacy and its implications when you're developing products, features, even marketing campaigns based on personal data. It also means encouraging employees to ask themselves questions before collecting or using data. Questions such as: do I need all the data I'm collecting here? Could I do this work without using personal data at all?

Am I using the data in a way a user may not expect?


And do I have a plan to delete this data once myself or my team no longer need it?

The GDPR also encourages organizations to document key privacy decisions they make around the
collection, use, and storage of personal data. Documenting compliance with the GDPR may be one of the
most challenging and time-consuming aspects of this law. There are several ways an organization can
demonstrate and document compliance. First, your organization may need to complete a privacy review
process of products or features to ensure GDPR compliance before they go live.

This is often referred to as a data protection impact assessment, or DPIA.

DPIAs help document key decisions within an organization that have a privacy impact. Second, your company should also inventory the personal data it stores and collects. In addition, audits should be conducted regularly to ensure the mapping of your data flows is always up to date.

Third, your company should update its existing policies and procedures or, if none exist, develop new ones that outline how personal data will be protected, deleted and processed.

Fourth, provide training to ensure employees understand their role in helping to protect data and honor customer requests. This is key. Without training, employees will not understand their responsibilities, and meeting the standards of GDPR requires everyone in an organization to be on the same page about data privacy. Finally, consider setting up a help center for users so that they know how to easily exercise their data subject rights. And if you're an online organization, consider updating the settings you offer your users.

If you remember nothing else, remember the golden rule. Treat the personal data of others with the
same care as you would want your own data treated.

Most privacy issues can be resolved by following this simple rule.

Data subject rights, lawfulness of processing, and the responsibilities of controllers and processors are key concepts of GDPR. Data subject rights, or DSRs as they are often called, are rights designed to give individuals greater control over their data.

DSR's are designed to give individuals control over who has access to their personal data and how it is
used.

DSR's are actually a benefit to each of us as individuals.

So let's quickly go over these key rights.

The right to be forgotten, which means individuals can ask companies to delete their data; the right to access the data a company has about you; the right to portability, which allows individuals to ask companies to provide their data to another company on their behalf; and the right to restriction of processing, which allows an individual to require a company to stop processing their personal data.

The right to rectify or correct data a company may hold about you.

And finally, the right to object to the processing of personal data about you at anytime.

As you can see, these rights are comprehensive and can be a bit hard to keep track of.
The key takeaway, though, is that these rights are designed to put the individuals who create the data in the driver's seat, rather than the companies who use it.

Another key concept is lawfulness of processing. The EU requires companies to have a legal basis for collecting, using, handling and storing individuals' personal data. There are several ways companies can prove they are lawfully processing data, but the three most common methods are contractual necessity, consent, and legitimate interests.

Contractual necessity means there's an agreement in place between a company and an individual about the processing of personal data. This basis applies whenever the collection of data is necessary to fulfill a contract. So, for example, when you contract with your cell phone company, they must collect your location in order to provide you with service.

Companies can also rely on consent, but under the GDPR that bar has been raised. In the past, consent had to be freely given and informed, but now it must also be unambiguous and followed by an affirmative action.

These enhanced requirements will make it really hard for companies to prove they've obtained a customer's consent. Finally, legitimate interest is another legal basis. It requires companies to balance the enterprise's interests against the rights and freedoms of the individual whose personal data is collected. So, for example, if you are making an online purchase, a company needs enough information about you to complete the transaction and prevent fraud.

That company could therefore rely on legitimate interests as a basis for collecting your name, credit card information and address; the company's interest in processing the transaction and preventing fraud outweighs your interest in protecting the privacy of your name and address.

The final key concept I want to discuss is controllers and processors. Under the GDPR, companies are broken into two groups. Companies that decide how personal data will be processed are controllers. If you're processing data at the direction of another entity, you're a processor. Imagine for a moment that instead of talking about data here, we're talking about money. As a controller, you are responsible for keeping your money safe, deciding how to spend it, and who to share it with. If you're a processor, you're like a financial advisor: you're holding the money on behalf of your client, keeping it secure, and only using it the way your client tells you to. As a controller, if you don't meet certain obligations set forth in the GDPR, your company runs the risk of incurring a high fine and even being sued in Europe. If a processor mishandles data or makes a mistake, a controller can still be held liable for failing to diligently vet the processor.

So this is why many companies are revising contractual agreements with customers and carefully
reviewing the privacy and security promises of their vendors. If your vendor gets it wrong under the
GDPR, your company may also be on the hook.

As we evolve towards a data-driven economy, the winners will be the ones who embrace GDPR as an
opportunity rather than a burden.

== M07.1.intro ==
Welcome to regulatory compliance, one module in the security practices for CNS curriculum.

This module explains Nokia's privacy obligations, how Nokia is responding to regulatory change, and how that change presents a threat, particularly to some of our older and less secure software.

GDPR is specific to Europe, but is likely to become more widespread.

It gives citizens a much stronger entitlement to privacy than used to exist, and than exists in much of the rest of the world.

Companies like Nokia need to understand what they can and cannot do, and this module includes a unit explaining the GDPR and a look at how Nokia handles privacy.

Critical infrastructure, as you will have already noticed, has been repeatedly compromised by hackers and criminals. The impact has been in the billions, and it is getting worse.

Western governments have also noticed, and they are reacting in ways that are going to make Nokia's job more demanding.

Governments are responding to attacks on critical infrastructure and other cyber threats by increasing
the regulation that our customers are subjected to.

The industry introduced NESAS, which is covered in module 9, as a response: a voluntary set of regulations that we comply with in the hope of reducing fragmentation, and this has been somewhat successful.

But the overall direction towards more regulation has continued.

Nokia has a team monitoring regulatory change in over 20 countries and regions, and that number is only increasing.

Of these countries and regions, the most concerning for us at the moment of recording is the UK.

In the UK, telcos are already not allowed to install equipment from high risk vendors and may still be
required to remove equipment from their networks.

That is primarily about Huawei, but it would be a mistake for us to think that we are immune. We too could be designated as a high risk vendor if the government is not happy about the security of the products we sell.

This could mean there are some products we cannot sell in the UK.

This could mean that we cannot operate at all in the UK.

But this is bigger than just the UK because other countries are also heading in the same direction.

In the US, the White House just released a strategy paper that will impact us.
Exactly how it will impact us will not be clear until legislation is debated and enacted, which will take
years.

But amongst other things, it appears likely that companies that don't follow best practice can be sued if their customers are breached.

But there will be protections for companies that do follow best practice.

It's not yet clear what constitutes best practice.

Nokia has responded to this regulatory pressure by launching Project Pegasus, which is aiming to make Nokia products more secure in the ways that the regulators want.

It's currently a three-year project, aiming to deliver some improvements every three months.

Expect to receive updates, and we should expect to see changes in processes and in priorities over time.

For more updates, see nok.it.

This module contains mandatory units on the GDPR and privacy, including the Privacy Champion
program.

This module contains optional units on Project Pegasus and on the White House National Cybersecurity
strategy.

This module was prepared and read by Ben Aveling, cybersecurity architect and global champion for the security for CNS Community of Practice. This module is based in part on material created by John Hickey, Senior Security Engineer, Product Security.

Thank you for taking this module. You're making Nokia and its customers safer.

== M07.Privacy ==

Privacy Champion Privacy Training Module 1 : Nokia Privacy Program

1.1 Welcome

Welcome to the Nokia Privacy Program training, the first of four modules in the NSW Privacy Program
training series.

Prior to taking this training you should have completed the "We Respect Privacy" NokiaEDU training.
This training course contains numerous links to further information on the privacy topic.

You can access all links embedded in this training in the "Resources" tab.

A PDF version of this module is also available from the "Resources" tab.

You can use the menu items on the left and the control buttons along the bottom to navigate through
the training.
1.2 Module Objectives

Data Privacy is a critical concept for the success of Nokia's business strategy.

As a Privacy Champion, you have an important role to play in the implementation of privacy in Nokia and
the creation and support of its products and services.

This is the first module in the Privacy Training for a Privacy Champion.

This module will focus on the business importance of privacy, describe how the Nokia Privacy Program is
structured and the role of the Privacy Champion in that Program.

Later modules will cover the European General Data Protection

Regulation (EU GDPR) that defines privacy compliance requirements for entities collecting and
processing the personal data of EU citizens (Module 2), fundamental data privacy concepts (Module 3),
and an introduction to Privacy Engineering & Assurance, the process we use in Nokia to implement an
important privacy principle, Privacy by Design (PbD).
1.3 Importance of privacy in our business activities

Privacy has a significant importance to the Nokia business.

This includes its importance to how Nokia handles the personal data of its employees, its customers or
when Nokia controls the personal data of consumers.

The fact is that privacy is a fundamental right and freedom in most countries where Nokia does business and is of growing importance globally, because of the increasing importance of data to businesses and the critical nature of the free flow of information.

We also cannot ignore that it is increasingly challenging to comply with industry, local and international
regulations, because society and governments are becoming wary of self-governance by businesses.

Nokia recognizes the importance of privacy and makes commitments to its customers, employees and
business stakeholders.

In addition, Nokia complies with the rule of law and local laws where it does business, including privacy
regulations.

Nokia's employees and customers, as well as consumers, care about their privacy.

This creates a market expectation on Nokia.

Nokia employees need to take account of the data we collect and process on behalf of Nokia.

With that use comes privacy risks, when that data processing includes personal data.

Data stewardship needs to be followed in the way we conduct our work.

A breach of trust in Nokia can take a long time to repair and the costs to remediate or fix any damage
can be considerable.
1.4 Nokia privacy program framework

Nokia has established a comprehensive company-wide privacy program that is based on relevant laws,
best practices, and standards.

The Program is intended to allow Nokia to create products and services for sale to a global marketplace.

The Nokia Privacy Management Policy, which is endorsed by the Nokia CEO, defines the Nokia Privacy
Program.

The Program is a framework for achieving Nokia's Privacy Vision: "We Respect Privacy".

The program is principles-based, meaning that Nokia has an ethical basis for how it fulfills its privacy
commitment, expressed by the Privacy Vision.

This guiding set of Privacy Principles is aligned with the privacy principles in globally recognized privacy frameworks and serves as the basis for a common set of Privacy Requirements for our products, services and operations.

The Program defines a set of privacy related roles with key responsibilities, from the Group Leadership Team (GLT) accountable for privacy at Nokia, to the Program Office that provides for privacy at a Group Function level, to the roles in the individual business groups that put the Nokia Privacy Program into operation.
1.5 Nokia privacy management policy codifies Nokia privacy program

The Nokia Privacy Program that is codified by the Nokia Privacy Management Policy establishes that
Nokia is not just focused on compliance with privacy regulation requirements, but that Nokia also
intends to be an accountable organization that can, on-demand, demonstrate its compliance to these
regulations.

An accountable organization is focused on a guiding vision.

It is principles based, such that it reflects internationally accepted privacy principles.

The organization has oversight of its privacy program at an executive level.

The Nokia privacy program is not just manifested at the corporate level, by the central Privacy Program Office, but is a company-wide program, with business group specific ownership for privacy and a privacy program that is reflected in the business strategy of those individual business groups.

Likewise, key privacy related roles have their counterparts within the business groups.

Should business owners disagree with the privacy findings of a business group's privacy team, there is a
clear escalation model defined by the Nokia Privacy Management Policy for how to resolve those
differences and if compliance with privacy regulations is called into question, there is a Privacy Legal &
Compliance set of legal counsels to provide legal basis for such findings.

And in keeping with the Nokia Privacy Vision, "We Respect Privacy", Nokia intends to cooperate with Data Protection Supervisory Authorities and is willing to respond to requests for information or cooperation.
1.6 Nokia privacy vision and principles

The Nokia Privacy Principles are the internal Nokia reflection of the various privacy frameworks across
the globe.

These Privacy Principles create a framework for the Nokia Privacy Program.

They are analogous to the role that genetic DNA plays in the human organism.

They manifest privacy across the Nokia businesses.

Within the Design for Security (DFSEC) process, these Privacy Principles have been transformed into 10 Privacy Goals and their respective tasks and requirements, which present the Privacy Principles in a manner that an engineer can put into practice through Privacy by Design (PbD).

Click on each of the Nokia Privacy Principles to learn more about it.

Fair & Lawful Processing means that Nokia processes personal data honestly, ethically, with integrity and
always consistent with the applicable laws and our own Nokia values.

Accountable means that Nokia has in place accountable privacy compliance measures and that we
monitor and enforce compliance to the Privacy Principles.

Privacy by Design is a commonly accepted principle for implementing privacy.

It means Nokia puts privacy as a key consideration in the creation, delivery and support of our products
and services.

Transparency means that Nokia is always open about its collection and uses of personal data that it
processes.

Choice & Individual Participation means that when Nokia is the controlling entity of an individual's personal data, we provide fair and reasonable choices for such collection and use of their personal data and allow an individual, where appropriate, to exercise their individual personal data rights.

Collection and Purpose Limitation means that Nokia collects and processes personal data that is necessary and relevant for the purposes for which it was collected.

Data Management means that Nokia applies reasonable data management practices to govern the processing of personal data.

Limited Disclosures means that Nokia does not share personal data with law enforcement or other governmental agencies unless required by law and that we limit sharing of personal data with our partners to what is described in Nokia's Privacy Statements or to what has been authorized by our customers.

Security Safeguards means that Nokia implements appropriate technical and organizational measures to protect personal data against unauthorized access, use, tampering or loss and that we similarly require our partners to apply appropriate privacy and security safeguards.
1.7 Nokia privacy management and governance model

The Nokia Privacy Program has both a centralized and also a distributed governance model.

At a central level, the Nokia CEO, Rajeev Suri, while accountable for the privacy related behavior of
Nokia, has delegated accountability for integration of privacy into Nokia to one of his Group Leadership
Team, currently Barry French, Head of Marketing and Corporate Affairs.

Within this Nokia Group Function, privacy is managed within the Health, Safety, Security, Privacy &
Environment (HSSE) team, headed by Petteri Rantanen.

There, a Privacy Program Office resides, headed by Colette Hanley, who also serves as Nokia Global Data
Protection Officer (DPO).

The Nokia Privacy Program is responsible for the Nokia Privacy Management Policy, annual program
objectives, group wide requirements and processes.

These are the operational elements of the Nokia Privacy Program.

Privacy legal, external regulatory and privacy influence matters are managed by the privacy legal team in Nokia Legal & Compliance, headed by Nassib Abou-Khalil.

The privacy legal team is headed by Magdalena Góralczyk.

Responsibility for privacy within business groups and group functions is delegated to executives within the individual business groups and other group functions.

The individual Nokia Business Groups and other Group Functions are required to implement privacy
programs that are based on the Policy, Principles, Roles, Processes, Requirements, Guidelines & Patterns
and Tools defined by the Privacy Program Office.

There is also an array of other cross-team groups that support privacy within Nokia.

Some of these include the security teams in HSSE, Legal & Compliance, Enterprise Risk Management,
Quality, Government Relations, Human Rights, Industry Standardization teams in Bell Labs and Corporate
level External Communications.

For personnel updates, please refer to the Nokia Group Function Privacy Team SharePoint, which is
linked in the resources tab, or on the reference page at the end of this module.
1.8 NSW privacy team

The NSW business group Senior Leadership Team (SLT) has made a significant commitment to the Nokia Privacy Vision and the Nokia Privacy Program.

The NSW head executive, Bhaskar Gorti, has delegated accountability for the business group's Privacy Program to Ron Haberman, the head of the NSW Technology unit, who is also the NSW CTO.

Frank Dawson heads the NSW Privacy Program.

He is accountable for implementing and overseeing the business group's Privacy Program.

The NSW Privacy Program has two Privacy Officers, Joy Dion and Jipson Kolenchery.

They provide the program with privacy expertise and support you, as the Privacy Champion within NSW.

Also, Lay-Been Tan is the program manager supporting the NSW Privacy Program.
1.9 Team Privacy Champion (Champ)

Privacy Champion (Champ) is typically a part time privacy resource, from within the product, services,
solutions or care team, who acts as a center of competence to proactively drive Nokia's privacy vision
and requirements into that team.

A Privacy Champ's main area of responsibility and expertise typically relates to an area other than privacy, but their job responsibilities necessitate competency in privacy matters.

As the nominated Privacy Champion in your team, you should review the high-level responsibilities for
this important role.

Even though privacy is not your full-time job responsibility, your role is critical to the NSW Privacy
Program.

It just will not work without your understanding and commitment to these responsibilities.

Whether you are in a NSW product, engineering, solution, services, care team or some other role, you play a critical part in a successful privacy program in NSW.

Your NSW Privacy Team is looking forward to working alongside you.

Make sure you set up regular, periodic meetings with your NSW Privacy Team, especially the Privacy Officers.

A collaborative work relationship is important to our mutual success in implementing Privacy by Design
across NSW.

The role of the Privacy Officer is not to say "No", but instead to "Give You The Know How" to accomplish
your Team's goals by simplifying privacy compliance.

You can start by discussing with your Privacy Officer the processes used by your Team. Make sure to highlight the significant roles and their responsibilities within your Team. Determine with your Privacy Officer how to interface privacy into that process and what roles within your Team are important in achieving Nokia's privacy goals. Create an awareness and training plan for your Team.

Discuss with your Privacy Officer your Team's portfolio, roadmap and schedule, including any upcoming milestones or decision points. Educate your Privacy Officer on the key Epics, User Stories or Features of your Team's portfolio objectives. Identify with your Privacy Officer the key privacy related regulations or regional requirements that will impact your Team and its portfolio.

When your Privacy Officer identifies a Group Function privacy team activity or request, determine with the Privacy Officer which delegated activities or actions you will need to collaborate on.
1.10 Being a privacy leader in your team

As we reach the end of this NSW Privacy Champion training module, remember to take and action the knowledge you have just learned.

- Reach out to your Privacy Team and get acquainted.

Schedule regular, periodic meetings or include your Privacy Officer in your Team's networking meetings.

- Remember to contact your Line Manager and add the new responsibilities of your Privacy Champion role to your annual performance targets in Success4U.

- Be an advocate for Privacy and encourage your Team members to take the NokiaEDU "We Respect Privacy" mandatory annual training, if they have not yet completed it this year.

- Discuss with your Privacy Officer which other roles in your Team have responsibilities that introduce
possible privacy risks and define a training & awareness plan to get them the privacy skills their role
requires.

- Has your team identified its current baseline readiness to comply with all the EU General Data
Protection Regulation (EU GDPR) privacy requirements?

If not, discuss with your Privacy Officer how to conduct a Quick GDPR Readiness Survey to create such a
baseline.

- When you have this EU GDPR readiness baseline, discuss with your Privacy Officer any open gaps and
define a plan to close those gaps.
1.11 Questions to consider

Do you feel you understand what was covered in this chapter?

Take a moment to see if you can answer YES to each of these questions.

When is your next scheduled meeting with your NSW Privacy Team?

Please note: you have to check all boxes to activate the Next button and to proceed with the training!
1.12 Reference material

Here you can find some references to Nokia privacy program information referred to in this training module.

Nokia Privacy Management Policy
https://nokia.sharepoint.com/sites/policies/Controlled Documents/Policies/Nokia Privacy Management Policy.pdf?csf=1

Nokia Privacy Engineering & Assurance Process
https://nokia.sharepoint.com/sites/Privacy/_layouts/15/guestaccess.aspx?guestaccesstoken=no6quudWmp8QtlKqly9BE7uMP+2rGtlfP/UKj6uaN9s=&docid=2_0593f7da1a55843ac9968d0f2861eca26&rev=1

NSW Privacy Team SharePoint
https://nokia.sharepoint.com/sites/aa_privacy/SitePages/Home.aspx

Nokia Group Function Privacy Team SharePoint
https://nokia.sharepoint.com/sites/HSS/Privacy/
1.13 Thank you

Thank you for following the module "Nokia Privacy Program".

You may now close this window and proceed to the next module "GDPR Compliance".
== M07.1. intro ==

Welcome to regulatory compliance, one module in the security practices for CNS curriculum.

This module explains Nokia's privacy obligations, how Nokia is responding to regulatory change, and how that change presents a threat, particularly to some of our older and less secure software.

The GDPR is specific to Europe, but similar regulation is likely to become more widespread.

It gives its citizens a much stronger entitlement to privacy than used to exist, and than exists in much of the rest of the world.

Companies like Nokia need to understand what they can and cannot do, and this module includes a unit explaining the GDPR and a look at how Nokia handles privacy.

Critical infrastructure, as you will have already noticed, has been repeatedly compromised by hackers and criminals. The impact has been in the billions and it is getting worse.

Western governments have also noticed, and they are reacting in ways that are going to make Nokia's job more demanding.

Governments are responding to attacks on critical infrastructure and other cyber threats by increasing
the regulation that our customers are subjected to.

The industry introduced NESAS, which is covered in module 9.

The industry introduced NESAS as a response: a voluntary set of regulations that we comply with in the hope of reducing fragmentation, and this has been somewhat successful.

But the overall direction towards more regulation has continued.

Nokia has a team monitoring regulatory change in over 20 countries and regions.

This number is only increasing.

Of these countries and regions, the most concerning for us at the time of recording is the UK.

In the UK, telcos are already not allowed to install equipment from high risk vendors and may still be
required to remove equipment from their networks.

That was aimed at Huawei, but it would be a mistake for us to think that we are immune. We too could be designated as a High Risk Vendor if the government is not happy about the security of the products we sell.

This could mean there are some products we cannot sell in the UK.

This could mean that we cannot operate at all in the UK.

But this is bigger than just the UK because other countries are also heading in the same direction.

In the US, the White House just released a strategy paper that will impact us.
Exactly how it will impact us will not be clear until legislation is debated and enacted, which will take
years.

But amongst other things, it appears likely that companies that don't follow best practice can be sued if their customers are breached.

But there will be protections for companies that do follow best practice.

It's not yet clear what constitutes best practice.

Nokia has responded to this regulatory pressure by launching Project Pegasus.

Which is aiming to make Nokia products more secure in ways that the regulators want.

It's currently a three-year project that aims to deliver some improvements every three months.

We should expect to receive updates over time, and we should expect to see changes in processes and in priorities.

For more updates, see NOK.IT.

This module contains mandatory units on the GDPR and privacy, including the Privacy Champion
program.

This module contains optional units on Project Pegasus and on the White House National Cybersecurity
strategy.

This module was prepared and read by Ben Aveling, cybersecurity architect and global champion for the Security for CNS Community of Practice. This module is based in part on material created by John Hickey, Senior Security Engineer, Product Security.

Thank you for taking this module. You're making Nokia and its customers safer.

== PS131-robustness-overview ==

Welcome to the robustness testing and DOS testing overview. It should take you approximately 10
minutes to complete this micro module. You can access the narration script and course navigation from
the menu tab.

This micro module consists of one chapter. In this chapter we will give a high level overview of security testing as part of the development process. We will also look at how robustness testing and DoS testing complement the other testing and security testing activities.

This course will address some specific aspects of security testing. Security testing is one of the test areas that is defined in the integration and verification process. Security testing contains the following testing activities: security feature testing, port scanning, vulnerability scanning, web application vulnerability scanning, robustness testing, and denial of service or, as it's also known, DoS testing.

Once security features are tested, we must check that their functionality still works as expected even in
boundary cases and when there is an unexpected input. This means that security features should be
tested in the same way and at the same time as any other feature of the product.

The other areas are feature testing, performance verification, operability testing, protocol conformance testing, compatibility testing, and interoperability testing.

Security testing attempts to simulate as closely as possible the steps that an attacker takes when
penetrating a target.

From scanning the available services and checking if there are any known vulnerabilities to testing if the
system can be made unavailable by denial of service attack.

In this training, we will focus on robustness testing and denial of service testing.

Robustness testing and DOS testing are both verifying the robustness and availability of the system
under test.

The robustness of a system is the degree to which a system or component can function correctly in the
presence of invalid inputs or stressful environmental conditions.

A robust system is able to show resiliency against problems such as invalid inputs and stressful
environmental conditions, which may include unanticipated events, unresponsive peers, stress
overloads, and other attacks.

All systems that comprise hardware also need to show resiliency against physical problems such as
equipment damage or loss of power. But these robustness aspects are not in scope here.

The term resiliency means no crashes, no denial of service characteristics, no unexpected or too extreme
degradation of service, and no other abnormal behavior.
When monitoring these characteristics, you should look out for things such as endless loops and high
resource consumption. You should also look out for segmentation faults, buffer overflows, or visible
error messages in the GUI or the log files.

And while some attacks inevitably lead to a certain level of degradation of service, a resilient system is also able to recover quickly, log events, perform features, and maintain services as designed.

It is important to note that all of the mentioned problems have one thing in common. The system architects have made assumptions about how the system operates and the environment it is operating in, usually assumptions based on expected or valid input. But these problems can and do occur in live operation, unintentionally or intentionally.

These problems can come from anything from misbehaving peer network elements to attackers: independent hackers, organized criminals, or intelligence agencies from different nation states.

Consequently, it is very important to perform robustness testing and DOS testing to ensure that the
services provided by the system are robust and secure.

Before looking into robustness testing and DoS testing in more detail, it is important to highlight that the boundary between robustness testing and DoS testing is not clear, and you will notice this when it comes to the tools that can be used for testing. Both testing areas mostly address availability and thus verify that there are no crashes, no unexpected degradation of service, no denial of service, and that there will be a fast recovery from an attack.

However, when looking at the problems that are addressed, we can notice the fuzziness. Robustness testing focuses on the first four problems while DoS testing deals with the last three.

However, this does not mean that testing is duplicated. It means rather that robustness and DOS testing
give attention to a different angle of the same problem. Let's look at a simple example to demonstrate
this.

When establishing a new TCP connection between a client and a server, the TCP 3-way handshake is performed. However, what happens if the client becomes unresponsive and does not send the ACK? Well, the server should wait for some time and then discard this attempt from the list of half-open connections, and this is exactly what is tested in robustness testing. A very simple test case, but a necessary one.
DoS testing, however, tests what happens when the server has to handle so many unresponsive peer elements that all available resources are bound.

Consequently, additional connection attempts by legitimate users are denied. This so-called TCP SYN
flood attack is one of the DOS test cases.
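
To make this concrete, here is a minimal, illustrative lab sketch of the two angles just described, assuming the Scapy packet library and a placeholder target address. It is not one of the official test cases, and it should only ever be run against your own system under test in an isolated lab network.

```python
# Illustrative sketch only, assuming Scapy is installed and TARGET/PORT point at your own
# system under test in an isolated lab. Requires raw-socket privileges.
from scapy.all import IP, TCP, RandShort, send, sr1

TARGET = "192.0.2.10"   # placeholder address of the system under test
PORT = 80

def half_open_probe():
    """Robustness angle: send one SYN, never send the final ACK, and let the
    server's own timeout logic discard the half-open connection."""
    syn = IP(dst=TARGET) / TCP(sport=RandShort(), dport=PORT, flags="S")
    synack = sr1(syn, timeout=2, verbose=False)   # server should answer with SYN-ACK
    return synack is not None                     # we deliberately stay silent afterwards

def syn_flood(count=1000):
    """DoS angle: open many half-open connections at once (a TCP SYN flood)."""
    for _ in range(count):
        send(IP(dst=TARGET) / TCP(sport=RandShort(), dport=PORT, flags="S"), verbose=False)

if __name__ == "__main__":
    print("SYN-ACK received:", half_open_probe())
    # syn_flood()  # only against your own test system, never a live network
```

The point of the sketch is only to show that the same protocol behaviour is looked at from two different angles: once as a single boundary case, and once as a resource exhaustion scenario.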

Now let's stop and think for a moment. In this last chapter, you learned what robustness testing was.
Here is your first of two questions. Can you complete the definition of robustness?

Your next question: when is a 3-way handshake used? Select one answer and then click submit.

Here are the key learning points from this module. The purpose of robustness testing and DoS testing is to verify the confidentiality, integrity and availability of the system data and services under test.

Robustness is the degree to which a system or component can function correctly in the presence of problems, and it also covers the speed of recovery.

Robustness testing tools fuzz all operable inputs per related protocol specifications; DoS testing tools fuzz particular inputs per known or likely system or component protocol vulnerabilities.

Finally, robustness testing of a system or component protocols per a business prioritization should be
conducted. In addition, DOS testing of system or component services per known or likely vulnerabilities
should also be conducted.

Thanks for your time. You may now close this module.

== PS132-robustness-fuzzing-synopsys ==

Robustness testing PS 00132 W 1120.


Welcome to this micro module on robustness testing.

It should take you approximately 30 minutes to complete this micro module.

There are two chapters in this micro module.

In chapter one you look at robustness testing in more detail.

Then in Chapter 2 you'll take a look at the tools used in robustness testing.

So what is involved in robustness testing and what problems are addressed by it? Let's take a look.

The robustness of a system is the degree to which a system or component can function correctly in the
presence of invalid inputs or stressful environmental conditions.

Robustness testing aims at addressing all the services provided by the system that are reachable from the outside on either the user plane, control plane or management plane.

Through these services, the system is communicating with peer network elements, management
systems or user devices.

However, these communication partners could be misbehaving either unintentionally, as in most cases of
misbehaving peer elements, or intentionally like most attackers.

For example, a configuration problem on one network element might result in unanticipated events
being sent to its peer elements, or it could result in unresponsiveness when contact is attempted.

Neither of these examples should cause an unexpected degradation of service or subsequently lead to a
crash or other unexpected side effects.
Naturally, some degradation of service has to be expected, but these should be clearly defined and the
affected element should recover quickly when the problem is eliminated.

The three main problems addressed by robustness testing are unanticipated events, invalid inputs and
unresponsive neighbouring elements.

They have one thing in common.

Each is caused by anomalies in the communication: either anomalies in the communication protocol's messages, like omitting mandatory parameters, exceeding the parameter field size, or using invalid characters; anomalies in the communication protocol's call sequence, like out-of-context messages, incomplete sequences, reordered sequences and such; or anomalies in other input, like malformed file formats.

Lastly, it has to be taken into account that these problems can occur on the application layer protocol,
but also on any other layer in the protocol stack.

This means that robustness testing must address the whole protocol stack.

Robustness testing is done by connecting a test tool directly to the interfaces of the system under test
and then executing the test cases in an automated way.

The test cases are usually generated using fuzzing.

There are different approaches to fuzzing. Click on the symbols to read more about each.

A model based approach uses a model of the protocol or file format as input to generate the test cases.

Therefore the protocol messages as well as the protocol sequences can be taken into account.

Moreover, protocol knowledge is required to allow testing of stateful protocols.


A template based approach uses traffic captures as templates, and the generated test cases vary the content of the traffic captures.

However, without understanding the structure of the content, many anomalies are not effective test
cases.

For example, the fuzzer needs to dissect the tested protocol layer from the underlying layers.

Otherwise the protocol testing becomes erratic.

Even if the anomaly generation is able to focus on the upper layer, a lack of protocol understanding can
result in ineffective test cases.

For example, if the checksum is not updated dynamically when varying parts of the message.

Similar problems may arise with stateful protocols, as malformed packets might result in different replies
than those expected by the traffic capture.

Consequently, template based fuzzing should only be used for simple request response protocols or file
format testing.

Otherwise a model based fuzzer should be preferred.
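
As a small illustration of the checksum pitfall just described, the following sketch mutates packets from a placeholder capture file and lets Scapy recompute lengths and checksums when the packets are rebuilt; a real traffic capture fuzzer does this internally, so treat this only as an illustration of the idea.

```python
# Illustrative sketch, assuming Scapy and a placeholder "capture.pcap" template file.
from scapy.all import rdpcap, wrpcap, IP, TCP, Raw

packets = rdpcap("capture.pcap")          # placeholder template capture
mutated = []
for pkt in packets:
    if Raw in pkt:
        payload = bytearray(bytes(pkt[Raw].load))
        if payload:
            payload[0] ^= 0xFF            # naive anomaly: flip the bits of the first byte
        pkt[Raw].load = bytes(payload)
        # Without this step the old length and checksum fields no longer match the mutated
        # content, and the system under test may silently drop the packet instead of parsing it.
        if IP in pkt:
            del pkt[IP].len
            del pkt[IP].chksum
        if TCP in pkt:
            del pkt[TCP].chksum
    mutated.append(pkt.__class__(bytes(pkt)))   # rebuilding recomputes the deleted fields
wrpcap("mutated.pcap", mutated)
```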

Independent of the used approach, the variation of content to introduce anomalies can be done in two
ways.

The first is using anomaly libraries, which are collections of inputs known to trigger vulnerabilities.

For example, if integer values are allowed, inputs like zero, a negative value or a very big number can trigger vulnerabilities.
When incorporating protocol knowledge, it is also possible to consider the allowed input set and test boundary and out-of-bounds conditions.
Using strings instead of integers, or vice versa, also makes for important test cases.
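
The following is a minimal, illustrative sketch of such an anomaly library for a single field; commercial fuzzers ship far richer, protocol aware libraries, so this is only meant to make the idea concrete.

```python
# Minimal, illustrative anomaly library for one field; values and bounds are examples only.
INTEGER_ANOMALIES = [0, -1, 2**31 - 1, 2**31, 2**32, -(2**31), 10**100]
STRING_ANOMALIES = ["", "A" * 65536, "%s%s%n", "\x00", "../../etc/passwd"]

def anomalous_values(field_type, upper_bound=None):
    """Yield candidate anomalies for one field, including boundary and type-confusion cases."""
    if field_type is int:
        yield from INTEGER_ANOMALIES
        if upper_bound is not None:            # just inside, on, and just outside the allowed range
            yield from (upper_bound - 1, upper_bound, upper_bound + 1)
        yield "not-a-number"                   # a string where an integer is expected
    elif field_type is str:
        yield from STRING_ANOMALIES
        yield 1234567890                       # an integer where a string is expected

# Example: anomalies for a length field that must stay below 255
print(list(anomalous_values(int, upper_bound=255)))
```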

Another form of creating malformed input is using random data, but this may not necessarily be the
most effective way to find problems.

A problem of fuzzing is that it results in an infinite set of test cases, each containing a different
combination of malformed input.

However, a meaningful testing time is finite, and so a good fuzzing tool needs to be skillful at prioritizing
test cases that are most likely to find bugs.

A typical test case for protocol robustness testing contains a sequence of messages, including messages
from the test tool and replies from the system under test.

It does this until finally reaching the malformed protocol message or the anomaly in the protocol
sequence.

The test tool needs to play along and react appropriately to the replies from the system under test, but
then also analyze the reaction on the malformed input.

Moreover, the test tool needs to provide mechanisms to observe how the system under test tolerates
each test case.

It also needs to provide mechanisms to observe the fast stream of test cases.

This monitoring is independent of the reaction of the system.

As for some protocols, no reaction or reply might be fine.


Its focus is to check that there is no degradation of service, no crashes or reboots, no error messages in the log files that could indicate a problem, and that the overall health of the system, for example regarding resource consumption, is normal.

So in practice, observing the system under test means to perform different actions.

This is often called instrumentation.

Firstly, this means to monitor the log files during the test cases for any messages that indicate problems
or error conditions.

Also, the console output should be checked and, if supported by the system, this could even mean to retrieve and examine vital signs or statistical values via SNMP.

Secondly, this means to run simple test procedures directly after each test case that can check if the system still behaves OK, functionality- and timing-wise.

In the simplest form, this means to run a pre validated test case with valid input which contains no
anomalies.

Depending on a test tool, different means for observing the health and robustness are included.

Nevertheless, these need to be tailored to comply with the system under test specification to make it the
most effective it can be.

To name just a few examples, this means to determine the baseline for resource consumption to
conclude the allowed delays between requests and to select the log files and resource consumption
statistics to be monitored.

Robustness testing has to be done for each protocol and each interface.
This means that if one interface uses HTTP on the application layer, then the protocols on the transport,
Internet and link layer also need to be robustness tested.

So the robustness test tool for Ethernet, IP, TCP and HTTP is executed during test case execution.

The determined log files and statistics need to be monitored to identify crashes and problems.

For some protocols, there might even be protocol specific log files and statistics that should additionally
be observed.

Looking down at the next interface which provides SSH on the application layer, the lower layers are also
TCP, IP and Ethernet.

If the interface uses the same protocol implementations for the lower layers, which is the case in most products, then there is no need to repeat the robustness tests of these protocols.

Consequently, only the SSH protocol implementation needs to be robustness tested.

For complete robustness testing, all interfaces need to be analyzed and the respective protocols need to
be robustness tested.

This will result in many protocols that have to be tested even when considering that the protocol
implementations for some of the lower layers have already been tested in other interfaces and thus
don't need to be tested twice.

We have seen that even for a single product, many protocols need to be robustness tested, so it is
advisable to plan the focus area and sequence.

As a general rule, we should prioritize protocol implementations that are potentially not yet proven to be
robust.
This means to prioritize protocol implementations that are developed in house.

However, it is also important to test rarely used protocol implementations that come from a third party, for example some telecommunication protocols, as well as widely used third-party protocol implementations, for example the standard TCP and UDP stacks that come with common Linux distributions.

Additionally, we should prioritize interfaces that are reachable from the user plane control plane and
management plane, in that order.

All prioritizations should be done considering the results from the threat and risk analysis and
considering the requirements from customers and regulatory bodies.

For a comprehensive test coverage of a protocol, there may be a significant number of test cases.

There may easily be millions of test cases that are generated using fuzzing.

It is generally encouraged and expected that all available tests will be run, but if this would take several
days or weeks, it is acceptable to decide on a test plan that addresses full coverage in several iterations.

For example, by running different parts of the generated test cases during regression tests that are
executed accompanying the development phase, and one finalizing round for each protocol with default
settings with good sampling of test cases from the entire test space and a time limitation of eight hours
or one day.

As far as we know, at the time of creating this training, such an approach is also in line with robustness
testing requirements from our customers.

However, such compromises should be well planned and weighed against the threats and risks.
Always remember that only one single malformed message from a peer network element or an attacker
can crash a live system, so we have to try hard to find these during testing.

If the tested protocol follows a client server concept, different test cases are required for server side and
client side testing.

For example, a server side test for HTTP targets the web server's implementation of HTTP.

The client side test for HTTP however addresses the browser software.

The fundamental principle of the client server model is that the client initiates the interaction and the
server replies.

The client either requests a service, such as in NTP, or establishes a connection, such as in TCP.

For robustness testing, it makes a significant difference if the server or client side is tested.

In server side testing, the system under test, or the server, is constantly waiting for requests and the test tool, or the client, can initiate the interaction at any time.

So the test tool can execute a stream of test cases. In client side testing, the test tool is the server.

This means that the interaction needs to be initiated by the system under test, which has here the role of
the client.

So basically each time the client contacts the server, a test case can be executed.

In many cases, however, the client only gets active at irregular intervals or with low frequency.

As a consequence, the testing will be very slow and it will take a long time, up to several weeks even to
reach an acceptable test coverage.
To accelerate the testing, the client should be triggered more frequently using some tailored trigger
program that is executed on the system under test.
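
As an illustration, a tailored trigger program can be as simple as the following sketch; the client command shown is a hypothetical placeholder and must be replaced with whatever actually starts the client under test.

```python
# Sketch of a tailored trigger program, to be run on the system under test so that the client
# connects far more often than its normal schedule. "my_client --connect test-server" is a
# hypothetical placeholder command, not a real tool.
import subprocess
import time

TRIGGER_CMD = ["my_client", "--connect", "test-server"]   # placeholder client invocation
INTERVAL_SECONDS = 2

while True:
    try:
        subprocess.run(TRIGGER_CMD, timeout=30, check=False)
    except subprocess.TimeoutExpired:
        print("client did not return within 30 s - possible robustness problem")
    time.sleep(INTERVAL_SECONDS)
```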

Client side testing also has to do more thorough monitoring of the system under test to detect any
robustness problems.

The detection methods that can be initiated from the test tool are more limited.

Therefore server side testing is much more straightforward than client side testing.

Also note that there might be challenges in server side testing.

The most important one can be rate limitations in the tested protocol, as these can be misinterpreted as non-responsiveness of the system under test.

Rate limitations are usually introduced for protecting against denial of service attacks.

Here you can see a list of the impacts to consider.

A robustness testing tool, however, is trying to execute the test cases as fast as possible to minimize the
testing time and thus might easily hit the rate limitation.

To avoid this, the delay between two test cases should be adapted to the rate limitation.

Alternatively, the rate limitation could be temporarily adjusted either to be less strict or after careful
consideration of the consequences by disabling the rate limitation during robustness testing.
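
One simple way to adapt the delay is sketched below; the rate limit value is an assumption and must be taken from the documentation or configuration of the system under test.

```python
# Sketch of pacing test cases below an assumed rate limit of the system under test,
# here 50 requests per second; the actual limit must come from the product documentation.
import time

MAX_RATE_PER_SECOND = 50                  # assumed rate limit of the system under test
MIN_DELAY = 1.0 / MAX_RATE_PER_SECOND     # minimum gap between two test cases

def run_paced(test_cases, execute):
    """Execute each test case without ever exceeding the configured rate."""
    for case in test_cases:
        started = time.monotonic()
        execute(case)
        elapsed = time.monotonic() - started
        if elapsed < MIN_DELAY:
            time.sleep(MIN_DELAY - elapsed)
```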

Now let's stop and think for a moment: which of the following problems are addressed by robustness testing? Select two answers.

Let's have a go at another question.

Can you finish this sentence about the client server model? Select one answer.

OK.

One more question.

Here are the key learning points from this chapter.

Firstly, robustness testing aims at addressing the confidentiality, integrity and availability of the system
data and services under test.

Secondly, robustness testing tools fuzz all operable inputs per standardized protocols.

Furthermore, robustness testing must be done for each protocol and for each interface.

Lastly, robustness testing of system or component protocols should be conducted according to the
priorities concluded from the threat and risk analysis and other business priorities.

In this second chapter, we look at the tools that you will use in robustness testing.

Let's dive in.

Our recommended tool for robustness testing is Synopsys Defensics.

There are test tools, or so-called test suites, and these test suites are available for most protocols that are used in the telco domain, both for server side and client side testing, as well as for file format testing. These test suites use model based fuzzing, and there is also a traffic capture fuzzer which uses template based fuzzing.

The latter is particularly useful for proprietary protocols and for protocols for which no other test suite
exists, or ones that Nokia doesn't have a license for.

For a few test suites, special hardware is required and needs to be ordered via Synopsys.

For the Bluetooth test suites, a USB Bluetooth dongle needs to be ordered and for the Wi-Fi test suites, a
USB Wi-Fi dongle is required.

A subset of the test suites provided by Synopsys is licensed by Nokia, and this so-called bundle is updated on request.

To get access to the test suites, registration is required, and the license is provided via a central license server that is hosted by IT.

Synopsys Defensics test suites provide various ways to monitor the health and robustness of the system under test.

Let's explore a few of them.

Click on the buttons to learn more.


External instrumentation is Defensics' support to run any script or command for monitoring the system under test.

It is additionally executed after each test case and the result determines if the test case gets a pass or fail
verdict.

As it is a tailored script, it is possible to monitor any log file or statistics on the system under test.

Note that the same concept can also be used for controlling the system under test, for example to bring
the system under test into a specific condition before or after each test case, or to reboot in case of
failed cases.
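
As an illustration, an external instrumentation script could look like the following sketch; the host name, log file path, error patterns and load baseline are placeholders that must be tailored to your system under test, and the non-zero exit code is what lets the test case be given a fail verdict.

```python
#!/usr/bin/env python3
# Illustrative external-instrumentation style health check; host name, log path, error
# patterns and load baseline are placeholders for your own system under test.
import subprocess
import sys

SUT_HOST = "sut.lab.example"                       # placeholder
LOG_FILE = "/var/log/app/application.log"          # placeholder
ERROR_PATTERNS = ["segfault", "panic", "OutOfMemory", "core dumped"]

def remote(cmd):
    """Run a command on the system under test over SSH and return its output."""
    result = subprocess.run(["ssh", SUT_HOST, cmd], capture_output=True, text=True, timeout=20)
    return result.stdout

def main():
    tail = remote(f"tail -n 200 {LOG_FILE}")
    hits = [p for p in ERROR_PATTERNS if p in tail]
    if hits:
        print(f"suspicious log entries: {hits}")
        return 1                                   # non-zero verdict: mark the test case as failed
    load = remote("cat /proc/loadavg").split()[0]
    if float(load) > 8.0:                          # assumed baseline for this particular system
        print(f"load average {load} above baseline")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```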

SNMP instrumentation can be used to query values from the system under test via the Simple Network Management Protocol.

In case values are outside a specified range, a warning will be issued, but not all systems under test support this way of monitoring.

The safeguard feature is an instrumentation method provided with some test suites.

The safeguard checks are able to detect more subtle errors, as they analyze the responses from the system under test after an anomaly has been sent.

For example, this way it can be detected if the system under test is leaking information by dumping
whatever content is in the memory rather than only the requested and allowed information.

For example, the OpenSSL vulnerability called Heartbleed was found using the safeguard feature.

If safeguard instrumentation is available for a test suite, it uses a collection of relevant checkers.
These checkers are based on Common Weakness Enumeration (CWE) types and the attacks associated with those CWE types.

Valid case instrumentation is the basic monitoring.

Each test suite provides a few test cases testing that a given feature is working and interoperating.

In valid case instrumentation, one of these test cases is executed after each real test case that contained an anomaly.

If the system is robust, there should be no longer delays than when a valid case is executed in a loop; if there are delays or no responses, this indicates a potential robustness problem that needs to be analyzed.

Valid case instrumentation is a very basic form of monitoring the system under test and can be used for
all server side test suites.

However, it cannot be used for client side test suites and for observing irregularities in specific log files or
statistics on the system under test.

Other monitoring needs to be prepared for that.
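
The idea behind valid case instrumentation can be sketched as follows; the health URL and the baseline response time are placeholders, and real test suites implement this check internally, so this is only to make the concept concrete.

```python
# Sketch of a valid-case style check: after each anomalous test case, send one known-good
# request and compare the response time against a pre-measured baseline. The URL and the
# baseline value are placeholders for your system under test.
import time
import urllib.request

VALID_REQUEST_URL = "http://sut.lab.example:8080/health"   # placeholder known-good request
BASELINE_SECONDS = 0.2                                      # measured before fuzzing started

def valid_case_ok(slack=5.0):
    """Return True if the known-good request still succeeds within the expected time."""
    started = time.monotonic()
    try:
        with urllib.request.urlopen(VALID_REQUEST_URL, timeout=10) as response:
            ok = response.status == 200
    except OSError:
        return False                     # no response at all: potential robustness problem
    elapsed = time.monotonic() - started
    return ok and elapsed <= BASELINE_SECONDS * slack
```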

Note that it is possible to configure the test suites so that the selected instrumentation is not executed
after each test case.

While this may considerably speed up the test execution, it also means that the system under test is
more loosely monitored and failures might not be detected.

When using Synopsys Defensics test suites, the workflow is as follows.

First, configure the test suite, the target and how to monitor the target's health and robustness.
If some of the configuration parameters or possible values are not clear, the built in documentation is
the first level of support.

Next, test that the test suite and the target interoperate.

Interoperation ensures that the basic configuration parameters are properly set and the connection to
the system under test is working.

Simple interoperation test cases are available for that purpose that are only sending valid traffic.

In addition to the built in documentation, the user guide also has comprehensive instructions on how to
overcome possible interoperability problems.

If the interoperation test for the selected group of test cases is not OK, the configuration needs to be adjusted.

If the interoperability was successfully tested, the next step is to select all the test groups that shall be included. The tested features and anomalies to be applied should be defined in the security testing plan.

Each test suite also provides different test coverage categories: Default, Full, Unlimited, and the possibility to configure the coverage and focus area in detail.

The full set should be selected. When starting the test execution and letting it run for a few minutes, the tool provides an estimation of the remaining duration.

As explained earlier, full coverage can also be achieved in an iterative approach during regression testing, and final testing can then focus on a time-boxed run of a well selected and carefully balanced subset.
Once the 1st results from the test execution are available, they should be analyzed.

It is recommended to do this promptly, ideally while the test execution is still running to ensure that
everything is working OK.

Note that it is also possible to pause and resume. Once all test cases have been executed, failed cases should be re-executed, preferably as part of their original test group.

Robustness must be analyzed during the first execution, to make sure nothing is missed if the test cases pass during the re-execution.

This basically means checking all logs and the monitoring results that were constantly checking the
health and robustness of the system under test.

All failures that are not reproducible may indicate a hidden robustness problem and the failure pattern
should be analyzed and correlated with other activities that were happening at the same time.

Such problems could be, for example, a slowly filling buffer that suddenly results in a buffer overflow, or rare cases of interference. Detailed analysis and systematic re-execution can help to better understand the failure and to find the root cause of the problem.

If test cases are failing consistently or are repeatedly erratic, the problem needs to be reported in the
fault reporting tool and a detailed analysis has to be triggered.

Additionally, a remediation package should be saved from the re-execution so that all logs from the failed test case or cases are included.

Finally, the test reports from the test suite, including all logs and any information about the configuration
of the system under test, should be saved and included in the robustness testing report.
Note that a specific template is available, similarly as for other security testing reports.

It is also possible to save the active settings and also a test plan inside the test suite.

This helps to reload the test suite with the same configuration.

Nevertheless, all configuration parameters are also included in the test results.

The execution of test cases with Synopsys Defensics can be automated.

This is possible via command line or via a simple Restful HTTP API.

Also, a Jenkins plugin is available.

Click on the buttons to read more about each approach.

For command line automation there are some preparatory activities recommended via the GUI.

These are verification of the license and installation of the test suite as well as configuring the test suite
and running interoperation test cases.

Once everything is ready, this should be saved as a so-called test plan.

Then, using the CLI commands that are described in the test suite documentation, the execution can be started using either the pre-saved test plan or the provided parameters.

There are also CLI commands to resume an interrupted test execution and to rerun the test cases of a
previous run.

The resume feature is helpful when continuing after a crash of the system.
The rerun feature is usually handy for rerunning failed test cases of a previous run.

It is also possible to trigger a time boxed execution, for example to stop a test suite after one day of
execution.

The command line commands and parameters are contained in the documentation as well as via the CLI help command; test suite specific parameters are also contained in the built-in test suite documentation.
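
The time-boxed execution mentioned above can be illustrated with the following generic wrapper; it deliberately does not show the actual Defensics command line options, which must be taken from the tool documentation, and the command passed in is only a placeholder.

```python
# Generic time-box wrapper around a test-suite CLI run. The actual Defensics command line and
# its options must be taken from the tool documentation; the example call uses placeholders.
import subprocess

def run_time_boxed(cli_command, max_seconds=8 * 3600):
    """Start the given CLI command and stop it when the time box expires."""
    proc = subprocess.Popen(cli_command)
    try:
        return proc.wait(timeout=max_seconds)   # normal completion within the time box
    except subprocess.TimeoutExpired:
        proc.terminate()                        # time box reached: stop the execution
        proc.wait()
        return None

# Example (placeholder command and options):
# run_time_boxed(["<defensics-cli>", "<options-from-the-documentation>"], 24 * 3600)
```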


For using the Restful HTTP API, Synopsys Defensics needs to be started in the HTTP API mode.

Via HTTP GET commands, it is then possible to configure the test suite, to start, pause, stop, or otherwise control test runs, and to monitor the results.

Again, it is recommended to preconfigure a test plan via the GUI.


The Restful HTTP API commands can easily be executed from a test automation framework and from
simple scripts.

Getting started with the Synopsys Defensics tools is not very difficult, as Nokia has a company-wide license.

Here is some of the basic information on how to get started and how to get support in case of questions
or problems.

Before getting started, each user needs to register to get access to the tool and information about the
license.

Once access has been granted to the Synopsys download area, it is recommended to consult the Defensics installation guide first.

It helps to overcome starting problems like installation, hardware and software requirements, as well as license configuration.

After installation of the Defensics GUI and the required test suites, the Defensics user guide instructs on how to use the GUI, how to configure the test suites, and topics such as how to do troubleshooting.

The context sensitive help provides easily accessible help for the UI and the respective test suite.

To embed Defensics into the available test automation setup, the Defensics test integration guide can be consulted.

Also, other references for automation and integration to CI Pipelines are provided.

There are also various trainings available, ranging from dedicated and often recorded training sessions for Nokia to short training videos generally provided by Synopsys for the most common topics like installation and best practice tips.

Even after studying the above provided material, there might be questions or problems that need to be followed up on. The main contact for such topics is Synopsys Technical Support.

Each registered user is entitled to get support by e-mail.

While there is a rather comprehensive list of Synopsys Defensics test suites available, there might be protocols that cannot be tested with Defensics.

For example, specific protocol variants, features or underlying protocol stacks might not yet be
supported, or interoperability is not successful.

Nokia proprietary protocols are usually not covered and might not be sufficiently tested with the traffic
capture fuzzer suite.

In most cases, there are also no other readily available robustness testing tools for these protocols.

In these cases, packet crafters or general purpose fuzzing tools have to be used.

Packet crafters are small tools for creating packets with custom content and sending them to the system
under test.

Most packet crafters are somewhat general purpose and support several protocols, but there are also some protocol specific packet crafters.
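
As an illustration, here is a small packet-crafting sketch using Scapy as a general purpose packet crafter; the port, the message framing and the payload are placeholders standing in for a proprietary protocol under test.

```python
# Illustrative packet-crafting sketch, assuming Scapy; the UDP port and the message framing
# are placeholders for a proprietary protocol on your own system under test.
from scapy.all import IP, UDP, Raw, sr1

TARGET = "192.0.2.10"       # placeholder system under test
PORT = 5000                 # placeholder proprietary-protocol port

# Hand-crafted message: a version byte, a length field that lies about the payload size,
# and an over-long value, to probe the parser's handling of inconsistent lengths.
payload = bytes([0x01]) + (0xFFFF).to_bytes(2, "big") + b"A" * 64

reply = sr1(IP(dst=TARGET) / UDP(dport=PORT) / Raw(load=payload), timeout=2, verbose=False)
print("reply received" if reply else "no reply - check the health of the system under test")
```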

If only a few specific test cases need to be generated, such as for testing new suspected vulnerabilities, packet crafters might be a good option.


However, there are no predefined test cases and consequently it requires a significant effort to
implement a sufficient level of robustness testing coverage.

There is also only limited support for stateful protocols.

General purpose fuzzing tools can complement the packet crafters and support additional test case
generation using fuzzing based on the provided protocol information.

However, to achieve a sufficient level of robustness testing coverage, there is still quite some effort required. To summarize: in case you need to use tools other than Synopsys Defensics, you need to have an understanding of the different packet crafters and general purpose fuzzing tools, and analyze which is the most suitable for the respective protocol to be robustness tested.

Before starting the implementation, it is essential that you consult on and agree the tool with the central Product Security organization.

This agreement is important to keep track of the use security testing tools and will ensure that we can
better share best practices.


If you would like to view more material, please click on the links provided.

Now let's stop and think for a moment.


Read the statements about robustness testing tools, which are shown below and decide whether they
are true or false.

Here are the key learning points from this micro module.

The tool we recommend for robustness testing is Synopsys Defensics, though in addition there are also fuzzing tools such as packet crafters and general purpose fuzzing tools.

The test suites for most protocols use both model and template based testing as well as file format
testing.

There are different ways to monitor the health of the system and its robustness during testing. These include valid case instrumentation, external instrumentation, SNMP instrumentation, and safeguard instrumentation.

The execution of the test suites can be automated using the command line interface, the Restful HTTP API, or the Jenkins plugin.

Thanks for your time.


You may now close this module.

== PS133-volumetric-testing ==

PS 00133 W 1120.

Robustness and DoS testing, module 3: DoS testing

Welcome to this micro module on denial of service testing.

It should take you approximately 25 minutes to complete this micro module.

Let's now come to the final micro module in this course: denial of service testing.

There are two chapters in this micro module.

In chapter one we start by analyzing the problems that DoS testing focuses on, and then explain how DoS testing shall be performed.

Then, in Chapter 2 you'll take a look at the tools used in DOS and distributed denial of service testing.

So what is involved in DOS testing? Let's take a look.

After this training you will be able to understand DOS problems in products.

A denial of service, or DoS, attack makes a service unavailable by overwhelming the service with malicious traffic.

DoS testing mimics the attacks performed by a single attacker or a group of attackers, and also the rather rare but not unheard of cases of unintentional attacks from misbehaving or unresponsive peer elements.

A certain level of degradation of service is to be expected. Intolerable DoS conditions include an unexpectedly high degradation of service, crashes, other unexpected side effects, or no or very slow recovery when the attack is over.

DoS testing could be classified in different ways.

Let's start by comparing the main attack types.

Click on the icons to look at each.

Flooding attacks are performed by a single attacker and flood the target with external communication
requests.

This overload prevents the system under test from responding to legitimate traffic, as the resources such
as bandwidth, processor time, disk space or status tables are fully overloaded from the flooding.

Note that the traffic can be valid or invalid traffic as well as a mixture thereof.

The main characteristic that we look at here is the high volume.

However, in some cases a single machine cannot generate enough load to flood the system under test or
such an attack is prevented by mechanisms in the system under test.

In such cases, the attacker may use a distributed denial of service attack, or DDoS. With a DDoS attack, the attacker sends the communication requests from several machines, thereby achieving the required load to bring the system into degradation or denial of service.


Each involved node needs to generate only a moderate volume, but in combination they achieve the required overload.

Typically this is done using a botnet where the bots are compromised nodes and the attacker
coordinates the DDoS attack.

Another form of DDoS attacks are coordinated attack activities performed by a group of attackers.

This may even happen with the machine owners consent, as has been seen when the hacktivist group
Anonymous called for cyber attacks and people wittingly participated.

However, from the target's point of view, these attacks all look the same.

A high number of communication partners sending moderate to high amounts of traffic, resulting in stress and overload at the target.

While DoS and DDoS attacks both aim at flooding, the main difference is that in the DDoS case, each
involved attack machine is only required to send a normal looking amount of requests and thus it is very
difficult to identify malicious behavior, whereas flooding attacks from a single machine can be more
easily detected and blocked.

It is difficult to categorize DOS and DDoS attacks, but the following aspects are often listed when specific
DOS or DDoS attacks are described.

The type of attack, identifying it as either flooding or tailored traffic, and the usage of valid or malicious
traffic.

The target of the attack, meaning a network or single network elements.

The targeted resource.


This means particularly CPU, disk space or memory, but also other parameters like generated output traffic, number of simultaneous connections, number of parallel threads or processes.

The targeted type of weakness: are the weaknesses in the specific implementation, or are they weaknesses in the design or architecture?

The targeted location of the addressed weakness, separating here application weaknesses from OS or platform weaknesses.

The overall effect and the potential for fast recovery, meaning: is the target in a temporary denial of service condition while the attack lasts, or is the DoS condition persistent and requires additional activities once the attack is over?

A simple example of a persistent DoS condition is disk space exhaustion, which requires some clean-up during the recovery phase.

Here we can take a look at a few examples of attacks on communication service providers.

Also, we will name some well known tools that can be used to test how vulnerable our products are to
these attacks.

Let's start with the first DOS attack.

The slow HTTP attack targets the design principle of the HTTP protocol that demands that requests be completely received by the server before they are processed, so the attack tool opens many HTTP connections and keeps them open by slowly sending partial requests.

If the web server has too many such HTTP connections open, the maximum number of concurrent connections is reached, resulting in a denial of service.
When the first tool for this attack surfaced in 2009, several web servers, including Apache, were easily
affected.

While the slow HTTP attack is a flooding attack targeting the application layer, it only requires rather low
bandwidth, mostly because the additional partial requests are sent slowly, hence the name.

The slowhttptest tool can be used for creating Slowloris and slow HTTP attacks in our DoS testing.

Another flooding attack is the TCP SYN flood attack.

The attack targets implementation problems in TCP 3 way handshake.

It initiates TCP connections by sending SYN packets, hence the name, but deliberately interrupts the establishment so that the TCP service's list of connections is filled with half-open connections waiting for timeout.

Once the list is full and the memory is exhausted, no further connections can be established and further service is denied. To fill the connection list quickly and keep it full, flooding is required, but the bandwidth can be adjusted such that the list is just kept full.

Different variants of the attack are possible using either direct attacks, spoofed attacks, or distributed
direct attacks, so the defense for the attack needs to cover all these cases.

Even though the attack has been known since the mid 1990s, it is still worth checking how effective the
selected countermeasures are.

There are also similar attacks targeting UDP, such as the UDP flood attack.

The attack floods the target server by sending UDP packets to random ports, which will make the server
busy handling the packet and sending ICMP destination unreachable reply packets.
Some parts of security testing, namely UDP port scans and robustness testing, mimic the UDP flood
attack and are impaired by the common countermeasure ICMP rate limitation.

The T50 tool is well known for supporting TCP SYN flooding and UDP flooding tests.

Finally, let's have a look at the DNS amplification attack, which aims to flood a target with valid but
unrequested traffic.

The attack sends DNS name lookup requests to publicly accessible open DNS servers and spoofs the source address to the target's address.

As a consequence, the DNS server replies are sent to the target.

Amplification is achieved as DNS requests are much smaller than common DNS record responses, especially when requesting to include all possible zone information.

Two factors are interesting in amplification attacks, the bandwidth amplification factor and the packet
amplification factor.

The bandwidth amplification factor describes how much more bandwidth is consumed by the response
compared to the original request.

The packet amplification factor denotes how many more IP packets are required for the answer.

Amplification attacks are known for many different services and for some the amplification factor can be
much higher than for DNS.

To redirect the amplified responses, amplification attacks require spoofing the source address, and so
they typically address UDP based services.
Spoofing does not work with connection oriented protocols such as TCP, as TCP connections would not
process packets without matching state.

Amplification attacks can easily be turned into large scale DDoS attacks when using multiple amplifiers to
attack a single target.

This is usually called a distributed reflective DoS attack, or DRDoS.

Let's look at some real-life examples of DDoS attacks.

Click on the icons to reveal more about them.

Telstra customers were unable to access the Internet for several hours after a DoS attack overwhelmed
their domain name servers.

Trend Micro released findings that there has been an escalating global turf war where cyber mercenaries
are outbidding each other to use routers for DDoS attacks.

Interestingly, there was an increase in DDoS attacks during the 2020 COVID-19 pandemic, including
attacks on companies such as Amazon, T-Mobile and Verizon.

A DDoS attack on Amazon Web Services was so large that it reached a record-setting 2.3 terabits per second.

The DoS attacks presented so far have been concentrating on IP networks.

However, there were also telco specific DoS attacks addressing, for example, the GSM network.

As studying them will give valuable insights for analyzing the potential attack surface of other parts of the network, two examples are included here.
Once again, we are only looking at the attacks and don't discuss possible mitigation actions.

Telephony DoS attacks address land-based and mobile networks and have been reported by the US Cybersecurity and Infrastructure Security Agency.

The attack uses voice and messaging services to flood the network with requests for calls and messages.

Operational or technical deficiencies in controlling requests enable the attack to succeed.

As a result, the mobile network may exhaust available services and prevent legitimate communications,
such as emergency services.

Signaling attacks address a variety of attacks on core telco protocols.

Most signaling attacks are developed at the providers level.

This is because both SS7 and Diameter protocols function within the provider's core network and depend on security defenses implemented at the product or provider level.

Researchers have demonstrated denial of service, information leakage, and fraudulent account attacks via insecure provider networking assets.

Before planning DOS and DDoS testing, let's have a look at the test aim.

As stated earlier, it is not possible to prevent all degradation and denial of service when you've been
targeted by determined attackers.

However, the aim of DOS and DDoS testing can be summarized as follows.

To make sure that there are no reboots or crashes to ensure proper recovery after the attack and to
verify that the bar has been raised sufficiently high that launching such attacks requires a lot of effort.
Sufficiently high and a lot of effort are subjective, but the performance specification of the product, system or solution under test provides some guidance on the minimum requirements.

And this also means that stress tests, load tests and performance tests are a good starting point for
planning DOS testing.

Note that the perception of sufficiently high and a lot of effort at present time will likely be lower than
the more demanding perception at a later point during the lifetime of the product.

Advances in technology, detection or publication of new vulnerabilities, and the creativity of attackers may allow further attacks than are considered possible today.

There are a number of measurements that help testers determine the degree to which services can be made unavailable; two of them are illustrated in the sketch after the list below.

Concurrent TCP connections, TCP connections per second.

Application transactions per second.

TLS handshake rate.

Layer 2 throughput and average packets per second.

URL response time or time to last byte measured at client.

Time to first byte.
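As a minimal sketch of two of the measurements listed above, the following uses only the Python standard library to record an approximate time to first byte and the URL response time (time to last byte) for a target. The URL is a placeholder and should only ever point at your own system under test.

```python
# Minimal sketch: measuring URL response time (time to last byte) and an
# approximate time to first byte for a target under test.
# The URL below is a placeholder; point it at your own system under test only.
import time
import urllib.request

def measure(url: str, timeout: float = 10.0) -> dict:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read(1)                       # first byte of the body received
        ttfb = time.perf_counter() - start
        response.read()                        # drain the rest of the body
        ttlb = time.perf_counter() - start     # time to last byte
    return {"time_to_first_byte_s": ttfb, "url_response_time_s": ttlb}

if __name__ == "__main__":
    print(measure("http://system-under-test.example/health"))
```

Running the same measurement before, during, and after an attack run makes the degradation and the recovery time directly visible.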

Planning DOS and DDoS testing requires more effort than other security testing activities.
Here we provide planning guidance on what to consider and what to use as input.

You should consider the following: the end-to-end picture and how the product or solution is embedded into the overall network.

Multivendor deployments, especially how their behavior and input is handled by the system under test.

Attacks from single source or multiple sources to cover DOS as well as DDoS attacks.

Which resources can get exhausted and why? For instance, which input or activity uses which resource
or generates output traffic?

Implementation level and design level weaknesses.

Application as well as OS level weaknesses.

Specific timing and interleaving of traffic on the one hand to find attack points, but on the other hand to
increase the test coverage.

Combinations of valid and malicious traffic.

Traffic volume, particularly traffic volume that exceeds the expected normal traffic volume as indicated in
the performance specification.

Modifications to standard protocols, algorithms and designs should be analyzed, particularly to check for newly introduced design or implementation level weaknesses.

Don't rule out difficult cases: future technical advances might make them very possible in the near future.

Additionally, have a look at vulnerability scanning and robustness testing results.


Particularly check for fixed vulnerabilities that could allow DOS or DDoS attacks.

News about successful attacks with the aim to understand what attackers are currently up to.

Stress, load and performance testing plans and results, as these provide tools for load generation and for health observation.

Interoperability testing results, as these may provide insights regarding multivendor deployments, as well as threat and risk analysis results; in particular, look at risks which have not been addressed.

It is important to think out-of-the-box.

This is not an activity to tick the box of having run the required tool.

This is about preventing reputation damage and costly compensation.

The general approach for DOS and DDoS testing is similar to the setup used by attackers.

The test tool is run on a dedicated test machine for most DoS testing cases, plus a network of test machines, potentially even just virtual machines, for DDoS testing.

Take particular note of the hardware requirements of these test machines.

For example, for all flooding tests, they need to be able to send more traffic than the carrier grade
system under test is able to handle.
Also ensure that there are no intermediate network elements between the test machine and the system
under test that are not required for the test set up.

Otherwise, you could accidentally bring down the whole test lab while you are executing the DoS tests.

Also, for DoS and DDoS testing it is crucial to observe the health of the system under test: the resource consumption during and after the attack, and the error messages as well as the logs.
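A minimal sketch of such health observation is shown below. It assumes the third-party psutil package is installed and that the sampler runs on the system under test host, writing resource consumption to a CSV file so it can later be compared against the attack timeline; the file name and sampling interval are arbitrary choices for illustration.

```python
# Minimal sketch: periodically sampling resource consumption on the system
# under test during and after a DoS test run.
# Assumes the third-party 'psutil' package is installed on the SUT host.
import csv
import time
import psutil

def sample_health(output_file: str, duration_s: int = 300, interval_s: int = 5) -> None:
    with open(output_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "mem_percent", "established_tcp"])
        end = time.time() + duration_s
        while time.time() < end:
            established = sum(
                1 for c in psutil.net_connections(kind="tcp")
                if c.status == psutil.CONN_ESTABLISHED
            )
            writer.writerow([
                int(time.time()),
                psutil.cpu_percent(interval=None),
                psutil.virtual_memory().percent,
                established,
            ])
            f.flush()
            time.sleep(interval_s)

if __name__ == "__main__":
    sample_health("sut_health.csv")
```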

Now let's stop and think for a moment.

True or false.

DoS testing only checks for intentional malicious attacks.

Have a go and select whether you think this is true or false.

Here is your final question in this micro module.

Which of the following are aims of DDoS testing? Select three and click submit.

Here are the key learning points from this chapter. DoS testing tools fuzz particular inputs per known or likely system vulnerabilities.

Testing mimics the behaviors and intentions of single attackers or groups of attackers, as well as looking at where non-malicious issues may occur. So the three types of attack it tests for are flooding, tailored malicious attacks, and unintentional attacks.

These attacks often concentrate on IP networks, but there are also telco specific attacks.

The three aims of DOS and DDoS testing are ensuring confidentiality, integrity and availability of system
data and services, benchmarking system resilience through DOS conditions and reviewing known or
likely system or component DOS vulnerabilities.

Let's now have a look at the tools that can support DOS and DDoS testing.

Unlike in the other areas of security testing, the toolkit for DOS and DDoS testing is larger and more
diverse.

The required tools need to support testing for specific DOS attacks as well as brute force flooding with
high volume, valid and invalid traffic.

However, some of these tools should already be in use in other testing activities.

Tools for specific DOS attacks are manifold and often not much more than a simple script or program, so
most of them are separately available, but the toolkit collection from Kali Linux comprises many of them
already in its Linux distribution.

There are also built-in test cases in the TCP for IPv4 test suite of Synopsys Defensics, namely for Sockstress, LAND, SYN flood, Teardrop and distributed connect flood.

Tools for flooding with valid traffic comprise the tools that are already used for performance and stress
testing.

For example, load generators.

However, port scanners and looping the valid cases of Synopsys Defensics test suites can also contribute if the respective traffic type is required.

Tools for flooding with invalid traffic are either dedicated tools generating malicious traffic or Synopsys Defensics test suites.

In the latter case, the configuration should be tuned for flooding by executing the test cases as fast as possible, so that there are no delays between the test cases, and with instrumentation disabled.

You have now completed this micro module in DOS testing.

If you would like to view more material, please click on the links provided.

Here are the key learning points from this chapter.

In summary, the tools available for DoS and DDoS testing are manifold, and some tools should already be in use in other areas of testing.

The required tools need to support testing for specific DOS attacks as well as brute force flooding with
high volume, valid and invalid traffic.

The tools cover three categories: specific DoS attacks, high-volume valid traffic, and high-volume invalid traffic.

Thanks for your time.

You may now close this module.

== M08.secure-code-and-code-checking ==

Welcome to this module on secure coding and automated code checking.


Most real-world attacks on software exploit flaws in implementation, design, or coding.

Some flaws can be found using static application security testing tools, known as SAST tools, which is why Nokia requires the use of SAST tools.

SAST tools work best with cleanly written code.

Code that works in complicated ways can cause SAST tools to miss problems.

Humans, too, are better at finding issues when code is clean, and there are some things that SAST just can't find.

For all these reasons, Nokia has secure coding guidelines that developers should follow.

Web applications bring additional challenges.

This module will touch on those too. If code isn't cleanly written, then it's hard to tell what it's supposed to do, and if it's not clear what code is supposed to do, it's hard to be confident that it will prove to be robust when someone tries to exploit it.

Properly structured, consistently formatted, clean, readable and well documented code is vital for secure coding.

Every programming language has its own issues.

Every programming language has better and worse ways of doing things from the security point of view.

Nokia has secure coding guidelines for each of the major languages that we use.

All new code should follow the appropriate guideline and this should be reviewed before any code is
checked in.
To share one example from a coding guideline.

This from the Python guidelines alerts the programmer to an issue with the eval function. It explains what eval does, how a malicious actor might misuse the function, and it suggests ways to reduce the risk, up to and including not using eval at all.
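To illustrate the kind of issue such a guideline describes (this sketch is not taken from the Nokia guideline itself, and the malicious payload is purely illustrative), the following shows why eval on untrusted input is dangerous and one common, safer alternative from the standard library.

```python
# Minimal sketch of the eval risk described above.
# The attacker-supplied string is an invented example.
import ast

user_input = "__import__('os').system('echo pretend this deletes files')"

# Dangerous: eval executes arbitrary expressions supplied by the caller.
# eval(user_input)  # would run the attacker's code - never do this

# Safer: ast.literal_eval only accepts Python literals (numbers, strings,
# tuples, lists, dicts, booleans, None) and raises an error for anything else.
try:
    value = ast.literal_eval(user_input)
except (ValueError, SyntaxError):
    value = None  # reject anything that is not a plain literal
print("Parsed value:", value)
```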

SAST tools can help detect many design issues and coding mistakes. The usage of SAST tools is mandated by DFSEC.

These tools look for patterns which match common vulnerabilities.

The patterns are generally heuristic and different tools generally look for different patterns. Every tool will find some vulnerabilities. No tool will find all vulnerabilities. The product security team recommends Coverity, and SonarQube is also available.

Using both may well find vulnerabilities that you would not find if you were to use just one or the other.

Like the guidelines, SAST tools are specific to particular languages or sets of languages, which may restrict what you can use.

All the normal concerns of secure programming apply to the development of secure web applications.

In addition, because users have remote access to web applications, we have to assume that potential
attackers will also have remote access to our web applications.

Even if an application is only supposed to be available on the customer's internal networks, internal networks get breached.

We have to assume that our products will be attacked, our products must always be coded to be resilient
to attack.
There are many attacks on web applications that are well known and often work, such as injecting runnable code where strings are expected.

Or modifying values that users aren't supposed to have control over.

Developers need to be up to date with common weaknesses and exploits relevant to their chosen technology chain. For web developers, this means being familiar with the OWASP Top Ten and with MITRE's common software weaknesses.

Want more? Visit Nokia Learn and search for secure coding. There are thousands of results to choose from.

Want even more? Go to LinkedIn learning and search for secure coding and you'll get even more options.

Thank you for completing this module. You're making the world, Nokia and its customers safer. This module was prepared by Anne Viks, Osec software architect specialist, and Stratos Saroglou, product security manager, and read by Ben Aveling, cybersecurity architect and global champion for the security of CNS Community of Practice.

== M08.secure-coding.free-of-malicious-code ==

Welcome to this module on the free of malicious code certification process.

During software development and deployment, there is always a risk that someone may try to
intentionally insert vulnerabilities or malicious code or backdoors into our software, either directly or via
our supply chain or during deployment.

Our customers are aware of this. Several of our customers ask us to provide what is called a free of
malicious code certificate.

Nokia would typically hire a third party company to provide such a certificate per product release.
For this purpose malicious code is defined as unwanted files or programs that can cause harm to a
computer or compromise data.

We are talking about deliberately malicious code. We are not talking about accidentally vulnerable code. There are other processes for dealing with that.

The guidelines we are given are structured into 5 sections representing the five places, or the five ways
in which malicious code has been known to enter systems.

We will not go through all five of these.

The first section of the recommendations, the design section, focuses on two areas of risk.

The risk that somebody will import malicious code via the supply chain.

And the risk that someone will build code that an attacker can exploit, or into which an attacker can inject a vulnerability.

The second section in the recommendations is code reviews.

Properly executed code reviews reduce the risk that a coder might accidentally or maliciously introduce unwanted changes into our code.

And they protect our supply chain against similar incidents.

The third section is the source code repository itself.

The source code repository must be protected from unauthorized modifications by controlling who can access it and by recording the changes made to it.
The 4th section is the build process. It must be kept secure.

But it must also deliver software that is demonstrably secure against unauthorized modifications, for example software that is signed with a cryptographic key and software that is secured by taking hashes.

The fifth and final section is delivery or distribution.

In this part of the recommendation, we are required to check the validity of the software.

So that we can be confident that the software being delivered is the software that was built.
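A minimal sketch of one such validity check is shown below: recomputing the SHA-256 hash of a delivered artifact and comparing it with the digest recorded by the build pipeline. The file name and expected digest are placeholders, not values from any real release.

```python
# Minimal sketch: verifying that a delivered artifact matches the hash
# recorded at build time. File name and expected digest are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large artifacts do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0123...cafe"  # digest published by the build pipeline (placeholder)
actual = sha256_of("product-release.tar.gz")
if actual != expected:
    raise SystemExit("Checksum mismatch: do not deploy this artifact")
print("Checksum verified")
```

Cryptographic signatures give a stronger guarantee than plain hashes, but the comparison step at delivery time looks much the same.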

By following the guidelines, we can reduce the risk of malicious code.

And we can demonstrate to our customers that we have done what is reasonably possible to reduce the risk of malicious code.

Thank you for completing this module. You're making the world Nokia and its customers safer.

This module was prepared by Anne Vikes, Osec software architect specialist, and Stratos Saroglou, product security manager, and read by Ben Aveling, cybersecurity architect and global champion for the security of CNS Community of Practice.

== M09.intro ==

Welcome to this module on processes and frameworks at Nokia.


Processes and frameworks are simply how we avoid reinventing the wheel.

Cybersecurity is 50 years old, or arguably older.

There's a lot of knowledge that already exists. Processes and frameworks are a way to capture that knowledge and thereby avoid having to repeat mistakes already made. Instead of working out what we need to do from scratch, our processes and frameworks give us a list of things to be done. The level of detail could be anything from "here are the steps to be followed" up to "here are some things you need to have thought about".

A lot of processes and frameworks exist, far too many for any of us to understand all of them. They overlap. They even contradict each other.

Some of them are more valued by customers than others. Some of them are required by law.

Working out which ones should be followed, and how best to follow them, requires effort.

To avoid repeating that effort on every project, Nokia has created the CREATE and the DFSEC processes.

Following these processes will generate a result that meets the main regulatory requirements
worldwide.

Note that there are different versions of the CREATE and DFSEC process, depending on what part of
Nokia you are in.

They're not identical, but they are similar.

Note also that the CREATE process is built out of other processes, one of which is the DFSEC process.

Likewise, the DFSEC process is also built out of other processes.


In this module, we'll touch on some of those processes. For more details, you can always consult the appropriate documents and the Nokia Learning Hub.

Processes and frameworks are an aid to thinking, not a replacement for thinking.

They remind us of what needs to be done, but if we try to follow them blindly, we'll be led astray.

Either someone will do the wrong thing without noticing it, or we'll hit a situation the instructions don't cover, and if we're not thinking, we won't stop.

It's important to know what a process says to do, but it's equally important to understand why we are
doing it.

As well as the DFSEC and CREATE processes, units in this module will, at a high level, introduce the NIST Cybersecurity Framework, known as the NIST CSF.

It'll introduce the ISO 27000 series and the NIST Special Publication series.

We'll also introduce the NESAS standards. Not to be confused with Nessus, the vulnerability scanner.

And STRIDE will be introduced.

As with every module in this certificate, the intent is to introduce concepts to you in the expectation that
where you have a need to do so, you will go further and drill deeper into the concepts that are being
introduced here.

As usual, there's plenty of material to be found in Nokia Learning Hub, in LinkedIn Learning, in SharePoint and in Yammer.

These standards are just a few of the standards that are potentially relevant.
These aren't covered in this module, but have a read through this list and see if there's any that you
think you might need to know about.

Again, you're encouraged to consider Nokia Learning Hub and LinkedIn Learning, and you can also consider consulting the issuing organizations' websites, because many of these documents are very well explained by the issuer or on the Internet.

Thank you for completing this module. You are making Nokia safer.

== M09.Nesas ==


Welcome to this module on NESAS, the network equipment Security Assurance scheme.

Mobile networks are critical infrastructure and need to be kept secure.

Individual nations were increasingly regulating mobile network equipment.

This led to a concern that security requirements were diverging, which would have increased cost for
vendors and the network providers, and even could have forced vendors to limit themselves to specific
markets.

The solution was to standardize on a common security assurance scheme for network equipment, called NESAS, the Network Equipment Security Assurance Scheme.

GSMA is a body that represents mobile operators worldwide.

3GPP is a standards body.

It's an umbrella body, meaning that its members are worldwide standards organizations themselves, and also a range of industry bodies representing mobile telecommunications equipment providers.
GSMA and 3GPP got together to define NESAS in order to remove the motivation for individual countries to create their own regulatory regimes, because of the costs that would be incurred if that sort of fragmentation happened.

NESAS provides a security baseline for

1. the evaluation of network equipment and

2. the evaluation of the processes used to create that equipment.

The way Nokia has incorporated NESAS is through the DFSEC and CREATE processes.

The NESAS requirements have been assessed and then fed into the DFSEC and CREATE processes.

That produces something which we use internally to evaluate our own processes and to evaluate some of our products. Not all of our products fall into this, but some do.

In scope for Nokia are what are called the generic SCAS suites, which are applicable to all 3GPP products.

In addition, there are product classes or product families that have additional specifications.

Here are some examples of how SCAS requirements are covered by the Nokia CREATE and Design for Security processes.

Each of these processes will generate artifacts which need to be captured and recorded in order to provide evidence of compliance.

Nokia self-assesses its own products.

And sometimes gets external auditors to check its self-assessment. When that happens, the external auditor will submit its audit to GSMA, and that audit will be published on the GSMA Internet site.
When evaluating our own products, we have to address the tests defined by the generic and specific SCASes.

For example, this test of TMS.

All tests have a unique name, a stated purpose, possibly preconditions, execution steps and expected results, and the expected format for the evidence which we have to collect and keep for review.

Our processes and our products can be independently audited, and the results will be reported to GSMA.

The audit reports are public and can be accessed at the GSMA website.

In summary, GSMA and 3GPP jointly defined the NESAS standards which define the security
requirements and tests for mobile network equipment.

Both the equipment and the processes are evaluated to demonstrate that they are meeting the defined baselines. Evaluations are both internal and external, and the benefit is that we have a uniform security requirement worldwide and a common understanding of what it means for GSMA/3GPP equipment to be secure.

Thank you for completing this module.

You're making the world Nokia and its customers safer.

This module was prepared by Annie Vikes, Arek software architect specialist, and Stratos Saroglou, product security manager.

It includes public material published by the GSMA and was read by Vinayak, cybersecurity architect and global champion for the security of the CNS Community of Practice.

== M09.Nist-And-Iso.Brennen ==
- [Instructor] The International Organization for Standardization, or ISO, develops and publishes
international standards for everything ranging from quality management and quality assurance to
information security management.

More to the point, the ISO 27000 family contains 45 separate standards and counting to help
organizations select and implement information security controls.

While ISO 27001 gets all the glory, it's really ISO 27002 that you should dig into.

The latest version of this standard identifies 93 specific security controls grouped into four themes: organizational, people, physical, and technological.

As an example, information security policies is one control domain, and within that domain, ISO provides
specific guidance around the policy documentation and the processes to support that documentation
that are necessary for an effective information security program.

You can review a summary of the ISO 27002 standard at iso27001security.com, or you could head on
over to iso.org and purchase a full copy of that document.

The US National Institute of Standards and Technology, NIST, maintains hundreds of special publications
on computer security, cybersecurity, and information technology.

These documents contain extensive detail on the selection, implementation, and management of
information security controls.

The NIST Cybersecurity Framework is very well known in this space, but don't let the name fool you.

The five functions: identify, protect, detect, respond, and recover are intended to help an organization assess and understand risk.

When it comes to assessing specific security controls, however, I think you'll find that NIST Special
Publication 800-53 is a little more to your liking.

Similar to ISO 27002, NIST Special Publication 800-53 contains over 1,000 controls distributed among 18
control families, including privacy, which overlaps with information security in certain areas.

800-53 also flags each control as either low impact, moderate impact, or high impact, which provides
you with an additional layer of guidance around which controls are required depending on the baseline
that you are pursuing.

800-53 was written to support the US Federal Information Security Management Act, or FISMA, which provides security guidelines for federal agencies. FISMA was released in 2002 and modified in 2014.

In my experience, as both a consultant and a practitioner, you can enjoy the best of both worlds by
combining both ISO and NIST within your organization.

What I've done in the past is I've used ISO to help me understand how to organize my security program,
and then, I've turned to NIST for the technical details about how to make the most of my security
controls.
== M09.Ps37 ==

Objectives

Upon completion of this module, you should be able to:

Describe the role of threat modelling methods.

Recognize the types of threat modeling methods.

The Role of Threat Modeling

Threat-modeling methods are used to identify:

the system architecture and flow,

the potential attackers and their goals,

the possible attacks and methods, and

the possible mitigations.

Threat modeling must be performed early in the development cycle with different stakeholders so a
large number of threat types can be fixed early, preventing a much more expensive fix down the line.

STRIDE

A commonly used system to identify the threats against ICT is the STRIDE approach. STRIDE has been
developed by Microsoft for its Security Development Lifecycle. The individual letters of this acronym
stand for the following six threat categories.

“S”: When attackers can spoof their identity, for example by using stolen login details of a legitimate user,
then they can perform actions and use services not planned to be available to them. Authentication is
compromised here.

“T”: Tampered data has lost its integrity. This might result in some end-customers being charged for
telecom services they did not use. Integrity is violated here.
“R”: When repudiation was successfully achieved by an attacker then it is not possible to relate critical
actions in a system to the person who triggered them. The English term for the characteristic of a system
to be protected is “NONrepudiation”.

“I”: Information disclosure means that confidential information has been shared with non-authorized
persons. This relates to the “Confidentiality” of the CIA triad.

“D”: The magnitude of potential impacts caused by a successful Denial of Service attack can vary from
temporary inconvenience for administrators of a system to complete system failure possibly impacting
the operations of a whole network. Availability is impacted here.

“E”: When unprivileged users can elevate their privileges by utilizing a local vulnerability, they might be
able to compromise or destroy the whole system. Authorization is compromised here.

Appropriate development, commissioning and operations activities of a product need to ensure that
none of the threats included in STRIDE can materialize.
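As a small aide-memoire, the mapping between the STRIDE categories and the security property each one compromises, as described above, can be captured in a few lines.

```python
# Minimal sketch: the STRIDE categories and the security property each one
# compromises, as described above. Useful as a checklist during threat
# modelling workshops.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

for threat, protected_property in STRIDE.items():
    print(f"{threat:24s} -> compromises {protected_property}")
```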

Lesson 5 - Other Threat Models

There are many other threat models and techniques such as PASTA, CVSS, LINDDUN, Attack Trees, TRIKE,
and VAST.

Threat Analysis within Nokia

At Nokia, the threat analysis is done at the first stage of the design for security process. Design for
security or DFSEC is the proactive part of the Nokia CREATE process for developing secure products and
systems.

The Security Threat and Risk Analysis stage is mandatory and must be conducted for each system and for
each release to identify and evaluate threats and risks.

Security Threat and Risk Analysis Process

The Security Threat and Risk Analysis process has these six steps.
1. The ‘System inventory and definition’ step collects the assets of the system under evaluation which
are critical for information confidentiality, integrity and availability.

Product Architecture, Network description, Specifications

2. The second step is about listing the possible threats for the system under analysis.

Possible attacks and vulnerabilities

3. The ‘Threat evaluation’ step provides the probability of occurrence for each threat identified.

Scale from 1 (Not likely) to 4 (Very likely)

The first three steps are focused on analysing the threats.

4. The ‘Impact evaluation’ step gives the severity of the possible losses or damages from the threats.

Scale from 1 (Annoying) to 4 (Disastrous)

5. The ‘Risk calculation’ step identifies areas which require urgent changes. The risk magnitude depends on threat probability and impact.

Combine => scale from 1 (Minor) to 16 (Critical); a simple sketch of this calculation follows after the list of steps.

These two steps are focused on evaluating and calculating the risks.

Monitoring and controlling is a continuous activity during the lifetime of a product. It ensures that earlier planned controls have been effective and considers the possible changes that occurred during system development.

Well done! This is the end of this chapter.


Now you should be able to:

Describe the role of threat modelling methods.

Recognize the types of threat modelling methods.

== PS00039-W-0720 ==

https://nokialearn.csod.com/ui/lms-learning-details/app/course/e48e23b8-6ac9-447f-9008-
6b823eea3694

Upon completion of this module, you should be able to:

Recognize the types of hackers.

Identify the capabilities of the attackers.

Describe the most common security attacks and threats.

Asymmetrical Challenge for Nokia

A cyberattack is an action or series of actions that exploits a vulnerability in a cyber system. All attacks have one thing in common: they take something that exists for valid reasons and misuse it in malicious ways.

The attackers’ key motivators are either financial gain or espionage. These motivators account for more
than 93% of breaches! The remaining 7% of attacks are done for the fun of it, ideology, or grudges. These
attacks tend to be less dangerous.

Nokia has a continuous challenge to always provide sufficiently secure products. We face an unfair battle
with the attackers. We need always to provide secure products by defending against potential threats
and attacks. Meanwhile, the attackers would only need one weakness in the cyber system to launch a
malicious act.

Therefore, we all must have awareness about the potential hackers and the most common attacks. The
next lessons in this micro-learning discuss the hackers’ profiles and capabilities, and the attacks that they
can do.

Hackers: Types & Capabilities


When it comes to hackers, there are three main types, and they are Black Hat, White Hat and Gray Hat
hackers.

The Black Hat hackers are the most dangerous type of hackers because they are experienced and have
malicious intent. They try to steal valuable information, manipulate the system, or bring the system
down for money or other illegal reasons.

The White Hat hackers are experienced and ethical. This type of hackers is employed by an organization
to find vulnerabilities in the existing system. These vulnerabilities are then reported to be fixed. The
White Hat is a defensive type of hacker.

The Gray Hat hackers are all about challenging themselves. They look for security holes in a system and
try to exploit them without authorization. The Gray Hat hackers do these attacks just for the fun of it to
see if they can do it. Despite what has been said, the actions of this hacker are illegal.

There are other types of hackers such as Red hat, Blue hat, Hacktivists, and Script Kiddies.

There is a wide range of potential attackers. Their capabilities depend on many parameters.

Here is a list of some basic parameters influencing the capabilities, such as: financial and technical
resources, technical skills, experience, knowledge, endurance, and willingness to take the risk of being
caught.

Without a doubt, “resources” is the most important parameter; somebody having enough cash can
easily set up huge data centers to crack cryptographic algorithms, hire personnel with skills and
experience, buy knowledge about vulnerabilities, and spend as much time for attacking as needed to
succeed.

When thinking of an attacker, many might think of a single individual. However, the reality is different!
Nowadays one can expect that meaningful attacks against ICT systems are usually planned and executed
by groups, which can be well organized and even state-supported with near-unlimited resources.
This leads us to the groups of attackers. This list includes intelligence agencies of the local or foreign
governments, individual hackers, or organized groups of them such as Terrorists, Hacktivists, Industrial
Spies, and Disgruntled Insiders.

Attackers can have different goals; curiosity and fun, the driving force of some individual hackers might
be the least dangerous for our industry nowadays. This kind of hackers usually doesn’t have the
preconditions to be able to pose a severe risk.

The higher threats are coming from attackers who are willing to spend huge resources for reaching their
goals. Those are interested in violating confidentiality to gain information, harm integrity for financial
gain, or cause havoc by denying availability to certain services.

Each of the potential attacker groups possesses different parameters and has different goals. Also, their
parameters and goals might vary depending on the system in question to be protected from them.

When determining the targeted security level for a product, one needs to examine what threats and risks are posed by the potential attackers. This is done by the “Threat & Risk Analysis”.

Knowledge Check

Complete the below sentences:

The ___BLACK___ hat hacker is highly skilled and makes malicious attacks.

The __WHITE____ hat hacker is ethical and helps organization to protect themselves by finding
vulnerabilities.

The __GRAY____ hat hacker finds security holes illegally just for fun.

Lesson 5 - Common Security Attacks

Common Security Attacks


Now, let us talk about the different types of attacks. We start with the Denial-of-Service, or “DoS”
attacks.

Here, the attacker sends a large number of connections or information requests to a target system to
exhaust its resources. A DoS attack succeeds when authorized users are not able to access the services.
This violates the “Availability” in the CIA triad.

There are various enablers for the Denial of Service attacks.

A lack of Product Security allows attackers to hack into a system and just bring it down from the inside.

Existing vulnerabilities can be used to crash services or the whole system. Lack of robustness, especially in protocol implementations, makes it easy for remote attackers. Legendary is the “ping of death”, where one single malformed basic Internet Protocol message was enough to bring down a wide variety of systems.

Exhausting a system’s resources by over-utilizing the service. This is e.g. used when attackers execute
Distributed Denial of Service attacks aiming to bring down servers by sending huge amounts of bogus
requests from thousands of computers organized in botnets.

Countermeasures include adhering to Secure Coding principles, where no risky constructs end up in the source code and it is ensured that unexpected inputs and events are gracefully handled (a small sketch of this follows below).

Verifying the correctness and robustness of protocol implementations by doing fuzzing as part of the
security testing activities.

A system’s overload limit needs to be known to the operator to protect it from accidental Denial of
Service. The behavior when encountering overload situations needs to be deterministic.

Because availability is a very important KPI for network operators, it is crucial for us that our products
are very robust and neither crash by accident nor suffer due to intentional Denial of Service attacks.
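To make the first countermeasure more concrete, here is a minimal sketch of graceful handling of unexpected input: a malformed message is logged and discarded instead of crashing the service. The message format and field names are invented for illustration only.

```python
# Minimal sketch of graceful handling of unexpected input: a malformed
# message is rejected and logged instead of crashing the service.
# The message format and 'msisdn' field are invented examples.
import json
import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("parser")

def parse_request(raw: bytes) -> Optional[dict]:
    try:
        message = json.loads(raw.decode("utf-8"))
        if not isinstance(message, dict) or "msisdn" not in message:
            raise ValueError("missing required field 'msisdn'")
        return message
    # JSONDecodeError and UnicodeDecodeError are both subclasses of ValueError.
    except ValueError as err:
        log.warning("Dropping malformed request: %s", err)
        return None  # the service keeps running; the bad input is discarded

print(parse_request(b'{"msisdn": "+358401234567"}'))
print(parse_request(b"\xff\xfe not json at all"))
```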

Knowledge Check
Choose the correct statement

Q1. Which is NOT a countermeasure against DoS attacks?

Q2. What is the possible violation caused by a backdoor attack?

Lesson 6 - Other Threats

Other Threats

Let us now go through the indirect threats that could compromise product security. These indirect
threats are attacks that target people. However, if the targeted people become victims then these
attacks could lead to more evolved breaches.

Step 1

On the top of the list comes the Phishing attacks. A phishing attack is done by sending emails that look
professional and mimic emails coming from trusted sources. These emails are designed to make the
reader perform an action such as clicking a link, or downloading a file, or opening a file to get personal
information.

Step 2

Another threat is accessing unsafe websites that claim to be offering “free” services such as video
streaming, music, and games. These websites could infect your device with malware, spyware, and other
malicious files.

Step 3

Impersonation is a type of social engineering attack. Hence, this attack is directed and tailored to a specific person or team. The goal of this attack is to gain certain information or gain access to a system by pretending to be another person. Some examples of impersonation are:

Getting emails from someone claiming to be a Nokia executive asking for sensitive information, or

someone claiming to be a recruiter asking the victim about sensitive business information, or

someone claiming to be a customer asking about the details of the latest products.

Summary

The best strategy for these threats is to NOT open such emails or untrusted websites, and to report them to Nokia IT Security. Falling victim to such attacks could lead to leakage of valuable information, which could include the design secrets of Nokia products.
Wrap-up

Well done! This is the end of this chapter.

Now you should be able to:

Recognize the types of hackers.

Identify the capabilities of the attackers.

Describe the most common security attacks and threats.

== M09.tcm ==

Welcome to this module on the Technical compliance management process.

CNS CREATE, which is based on the Nokia CREATE process, is used to bring products and their releases to general availability. The Technical Compliance Management process, known as TCM, and its subprocesses describe the activities necessary for the CREATE process to fulfill its part of meeting the following regulatory compliance requirements.
The US National security agreement, known as US NSA.

The China cybersecurity law, known as CSL.

And the European General Data Protection Regulation, known as GDPR.

I see five PM 8.

? ISO5?8?
The process requirement is to have products ready to be offered in all markets, including the USA,
Europe and China.

Which means following the technical compliance management process for the US NSA, for the European GDPR, and for China's CSL.

This slide shows a summary of compliance activities as presented in the technical compliance
management process with the mapping from the release perspective to the checklist criteria.

The CAPS workflows used in CNS are shown in blue, and requirements that are met prior to other criteria are shown in gray.

Release security evidence and release test evidence are required for all major releases and must be
updated at least once per year, even if there are no releases during that year.

All products that retain sensitive data or have privacy impact must meet the privacy requirements for
scrambling.

Scrambling is the process of obfuscating or removing sensitive data, also known as anonymizing or
sanitizing.

Log scrambling is required for all releases of all products that handle sensitive or privacy impacting data.

C5 requires the closing of a privacy data scrambling workflow.

The workflow will provide up-to-date test evidence of privacy data scrambling, either new test results
performed for the release, or a clear pointer to the last valid set of results, and a justification of why
those results are still valid.

The secure storage workflow is required to be executed for all releases. C5 always requires the opening of the software archive workflow. There is a period of 30 days to do the secure storage and to close the workflow. Since the software is available at C5, the recommendation is to do the storage immediately.
As per US NSA, Nokia must meet certain mandatory logging and hardening requirements as imposed on
service providers.

Nokia products must also conform to internal standards for product security and must be able to provide evidence that they do conform, in case we are required to do so.

Data is pulled from DCT as CAPS input data. Product security leads should collect the output of the DFSEC process and produce a zip file containing the following files: one, the security statement; two, the statement of compliance to the Hardening Specification; three, the privacy data considerations document; and four, the PSM report, which indicates the status of vulnerabilities known to be in the release.

Artifacts one through three are created as part of the DFSEC process; the fourth artifact, the PSM report, is generated in the VANS tool.

As per the US NSA, with each major release, or at least annually, Nokia shall perform robustness, functional, performance, regression, vulnerability and other testing for its products.

Nokia shall retain the test plans, test designs and test reports and make them available to the US government on request. Some data is pulled from the DCT assessment. Test reports, as output from the DFSEC process and P7 or C, have to be provided as release test evidence.

The privacy data scrambling workflow, which is common to the US NSA, CSL and GDPR, describes the process for developing and deploying privacy data scrambling functionality for each release of a product as part of the DFSEC process. Testing should be planned for each product and repeated each time a new release may impact the log files or other files used for troubleshooting where privacy-sensitive or customer-sensitive data is retained, and must address the scrambling rules.

During release creation, the product security lead identifies all data items to be scrambled.

Data items must be described using regular expressions, and it must be recorded where they might occur, whether that's log files, other ASCII or binary files, core dumps, or anywhere else.
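A minimal sketch of what such regular-expression-based scrambling might look like is shown below; the patterns, field names and replacement tokens are illustrative assumptions only, not actual scrambling rules from any Nokia product.

```python
# Minimal sketch of privacy data scrambling in log files using regular
# expressions. The patterns and replacement tokens below are illustrative
# assumptions, not the actual scrambling rules defined for any product.
import re

SCRAMBLING_RULES = [
    # (description, pattern, replacement)
    ("IMSI field",         re.compile(r"(imsi=)\d{14,15}"), r"\1<IMSI-SCRAMBLED>"),
    ("MSISDN-like number", re.compile(r"\+?\b\d{10,15}\b"), "<MSISDN-SCRAMBLED>"),
    ("email address",      re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL-SCRAMBLED>"),
]

def scramble_line(line: str) -> str:
    """Apply each scrambling rule in order to a single log line."""
    for _description, pattern, replacement in SCRAMBLING_RULES:
        line = pattern.sub(replacement, line)
    return line

if __name__ == "__main__":
    sample = "2024-01-01 12:00:00 call setup imsi=262011234567890 from +358401234567 user@example.com"
    print(scramble_line(sample))
```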

It's a common privacy requirement to minimize collection and retention of privacy-related logs. In addition, the US requires that customer logs for products supported by NSP China be scrambled.

All China customers must have log scrambling if logs are to be sent outside China.

For EU customers, anonymization, or in some cases pseudonymization, of personal data is always required.

Thank you for completing this module. You're making the world Nokia and its customers safer.

This module was prepared by Anne Vikes, Osec software architect specialist, and Stratos Saroglou, product security manager, and read by Ben Aveling, cybersecurity architect and global champion for the security of CNS Community of Practice.
