
OPERATING SYSTEMS AND SECURITY

UNIT 5
5.1 UNIX Security
Unix is an operating system that forms the base of many later systems, such as Solaris, macOS, and Linux distributions like Ubuntu, as well as of the POSIX standard. It was developed beginning in 1969 by Ken Thompson, Dennis Ritchie, and others at AT&T's Bell Laboratories. It was originally meant for programmers developing
software rather than for non-programmers.
Unix and the C language were created at AT&T and distributed to government and academic
institutions, which led to both being ported to a wider variety of machine families than any
other operating system. The developers' main focus in this operating system was the kernel, which was considered the heart of the system. The structure of the Unix system is described below.
UNIX is a family of multitasking, multiuser computer operating systems whose development began in
1969 at Bell Labs. It was originally developed for minicomputers and has since been
ported to various hardware platforms. UNIX has a reputation for stability, security, and
scalability, making it a popular choice for enterprise-level computing.
The basic design philosophy of UNIX is to provide simple, powerful tools that can be
combined to perform complex tasks. It features a command-line interface that allows
users to interact with the system through a series of commands, rather than through a
graphical user interface (GUI).
Some of the key features of UNIX include:
1. Multiuser support: UNIX allows multiple users to simultaneously access the same
system and share resources.
2. Multitasking: UNIX is capable of running multiple processes at the same time.
3. Shell scripting: UNIX provides a powerful scripting language that allows users to
automate tasks.
4. Security: UNIX has a robust security model that includes file permissions, user
accounts, and network security features.
5. Portability: UNIX can run on a wide variety of hardware platforms, from small
embedded systems to large mainframe computers.
6. Communication: UNIX supports communication methods using the write command,
mail command, etc.
7. Process Tracking: UNIX maintains a record of the jobs that the user creates. This
function improves system performance by monitoring CPU usage. It also allows you to
keep track of how much disk space each user uses, and to use that information to
regulate disk space.
Today, UNIX is widely used in enterprise-level computing, scientific research, and web
servers. Many modern operating systems, including Linux and macOS, are based on
UNIX or its variants.
Figure – system structure
 Layer-1: Hardware: It consists of all hardware related information.
 Layer-2: Kernel: This is the core of the Operating System. It is a software that acts as
the interface between the hardware and the software. Most of the tasks like memory
management, file management, network management, process management, etc., are
done by the kernel.
 Layer-3: Shell commands: This is the interface between the user and the kernel.
Shell is the utility that processes your requests. When you type in a command at the
terminal, the shell interprets the command and calls the program that you want. There
are various commands like cp, mv, cat, grep, id, wc, nroff, a.out and more.
 Layer-4: Application Layer: It is the outermost layer that executes the given external
applications.
Figure – kernel and its block diagram
This diagram shows three levels: user, kernel, and hardware.

 The system call and library interface represent the border between user programs and
the kernel. System calls look like ordinary function calls in C programs. Assembly
language programs may invoke system calls directly without a system call library. The
libraries are linked with the programs at compile time.
 The set of system calls divides into those that interact with the file subsystem and those that
interact with the process control subsystem. The file subsystem manages
files, allocating file space, administering free space, controlling access to files, and
retrieving data for users.
 Processes interact with the file subsystem via a specific set of system calls, such as
open (to open a file for reading or writing), close, read, write, stat (query the attributes
of a file), chown (change the record of who owns the file), and chmod (change the
access permissions of a file); a short C example follows this list.
 The file subsystem accesses file data using a buffering mechanism that regulates data
flow between the kernel and secondary storage devices. The buffering mechanism
interacts with block I/O device drivers to initiate data transfer to and from the kernel.
 Device drivers are the kernel modules that control the operation of peripheral devices.
The file subsystem also interacts directly with “raw” I/O device drivers without the
intervention of the buffering mechanism. Finally, the hardware control is responsible for
handling interrupts and for communicating with the machine. Devices such as disks or
terminals may interrupt the CPU while a process is executing. If so, the kernel may
resume execution of the interrupted process after servicing the interrupt.
 Interrupts are not serviced by special processes but by special functions in the kernel,
called in the context of the currently running process.
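
A short C sketch of the file-subsystem calls listed above (open, read, stat, chmod, close) is shown here. It is only an illustration of the interface, not part of the kernel; the path /tmp/example.txt is a placeholder used for this example.

/* Minimal sketch of UNIX file-subsystem system calls.
   The path below is a placeholder used only for illustration. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/tmp/example.txt";
    char buf[64];
    struct stat st;

    int fd = open(path, O_RDONLY);          /* open the file for reading */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, sizeof(buf)); /* read up to 64 bytes */
    if (n >= 0)
        printf("read %zd bytes\n", n);

    if (stat(path, &st) == 0)               /* query the file's attributes */
        printf("owner uid=%d size=%lld\n", (int)st.st_uid,
               (long long)st.st_size);

    chmod(path, S_IRUSR | S_IWUSR);         /* restrict access to the owner (rw-------) */
    close(fd);                              /* release the descriptor */
    return 0;
}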
Difference between Unix and Linux
Linux is essentially a clone of Unix. But, basic differences are shown below:
Source code: The source code of Linux is freely available to its users, while the source code of Unix is not freely available to the general public.

User interface: Linux offers a graphical user interface along with a command-line interface, while traditional Unix is primarily command-line (text) based.

Portability: Linux is portable and flexible and can be run from different drives and hardware, while most Unix variants are tied to their vendor's hardware and are not portable.

Versions: Different versions of Linux include Ubuntu, Linux Mint, and Red Hat Enterprise Linux. Different versions of Unix include AIX, HP-UX, BSD, IRIX, and Solaris.

File systems: File systems supported by Linux include xfs, ramfs, vfat, cramfs, ext2, ext3, ext4, ufs, autofs, devpts, and ntfs. File systems supported by Unix include zfs, jfs, hfs, gpfs, xfs, and vxfs.

Origin and licensing: Unix is a proprietary operating system originally developed at AT&T Bell Labs starting in 1969, while Linux is an open-source operating system first released in 1991 by Linus Torvalds.

Kernel: The Linux kernel is monolithic, meaning that all of its services are provided by a single kernel image, while many Unix kernels are modular, made up of independent components that can be loaded and unloaded dynamically.

Hardware: Unix was originally designed for minicomputers and larger proprietary systems, while Linux was designed to run on commodity hardware such as PCs and servers; Linux has much broader hardware support than Unix.

Shells: Common Linux command-line shells are Bash, Zsh, and Tcsh, while common Unix shells are the Bourne, Korn, C, and Z shells.
Advantages of UNIX:
1. Stability: UNIX is known for its stability and reliability. It can run for long periods of time
without requiring a reboot, which makes it ideal for critical systems that need to run
continuously.
2. Security: UNIX has a robust security model that includes file permissions, user
accounts, and network security features. This makes it a popular choice for systems
that require high levels of security.
3. Scalability: UNIX can be scaled up to handle large workloads and can be used on a
variety of hardware platforms.
4. Flexibility: UNIX is highly customizable and can be configured to suit a wide range of
needs. It can be used for everything from simple desktop systems to complex server
environments.
5. Command-line interface: UNIX’s command-line interface allows for powerful and
efficient interaction with the system.
Disadvantages of UNIX:
1. Complexity: UNIX can be complex and difficult to learn for users who are used to
graphical user interfaces (GUIs).
2. Cost: Some UNIX systems can be expensive, especially when compared to open-
source alternatives like Linux.
3. Lack of standardization: There are many different versions of UNIX, which can make it
difficult to ensure compatibility between different systems.
4. Limited software availability: Some specialized software may not be available for UNIX
systems.
5. Steep learning curve: UNIX requires a certain level of technical knowledge and
expertise, which can make it challenging for novice users.
Conclusion
In this section we discussed Unix, developed in the 1970s and the foundational
operating system behind Linux and macOS. Emphasizing simplicity and a command-line
interface, Unix's multitasking, multiuser design offers stability, security, and scalability.
Its structure is built from the hardware, a kernel, shell commands, and an application layer. Despite its
complexity, Unix remains a powerful choice for enterprise computing.
5.2 UNIX Protection System
• What does protection mean?
  – An access enforcement mechanism that authorizes requests from subjects to perform operations on objects
  – Requests: read, write, etc.
  – Subjects: users, processes, etc.
  – Objects: files, sockets, etc.

• Protection state: describes the operations that system subjects can perform on system objects

• UNIX protection state specification
  – Subjects: process identities
    • Process identities: user id (UID), group id (GID), and a set of supplementary groups
  – Objects: files
  – Access: read, write, execute
  – Protection state is specified by an access control list (ACL) associated with each file

• Each file is associated with:
  – An owner UID and an owner GID
    • A process with the owner UID privilege can modify the protection state
  – “mode bits” describe the ACL of a file
    • {owner bits, group bits, others bits}, where each element consists of a read bit, a write bit, and an execute bit, e.g., rwxr--r--

Example

-rw-rw-r-- 1 simon faculty 14 Sep 8 03:59 file1

-rw-rw-r-- 1 user1 faculty 14 Sep 8 04:04 file2

-rw-rw-r-- 1 user2 students 14 Sep 8 04:04 file3

“simon” belongs to group “faculty”

“user1”, “user2” belong to group “students”

-r-------- 1 simon faculty 14 Sep 8 03:59 file1

----r----- 1 user1 students 14 Sep 8 05:01 file2

-------r-- 1 user2 students 14 Sep 8 05:02 file3
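
The permission strings in the listings above (for example rw-rw-r--) are simply a rendering of each file's mode bits. The following minimal C sketch, using a placeholder file name, shows how such a string can be derived from the bits returned by stat(), in the same spirit as ls -l:

/* Sketch: render the owner/group/other permission bits of a file
   the way ls -l displays them. The path is a placeholder. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("/tmp/file1", &st) != 0)
        return 1;

    char bits[10];
    mode_t m = st.st_mode;
    const mode_t flags[9] = { S_IRUSR, S_IWUSR, S_IXUSR,
                              S_IRGRP, S_IWGRP, S_IXGRP,
                              S_IROTH, S_IWOTH, S_IXOTH };
    const char symbols[] = "rwxrwxrwx";

    for (int i = 0; i < 9; i++)             /* one character per permission bit */
        bits[i] = (m & flags[i]) ? symbols[i] : '-';
    bits[9] = '\0';

    printf("%s\n", bits);                   /* e.g. rw-rw-r-- */
    return 0;
}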

5.3 UNIX Authorization


• If the process UID corresponds to the owner UID of the file, use the mode bits for the owner to
authorize access.

• Else if the process GID or supplementary groups correspond to the file’s group GID, use the
mode bits for the group permissions.

• Otherwise, use the permissions assigned to all others.
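
A condensed C sketch of this decision procedure follows. It is only an illustration over the stat() mode bits; in_supplementary_groups() approximates the kernel's check of the process's supplementary group set using getgroups().

/* Sketch of the UNIX authorization decision described above.
   want is a 3-bit mask (4 = read, 2 = write, 1 = execute). */
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

static bool in_supplementary_groups(gid_t gid)
{
    gid_t groups[64];
    int n = getgroups(64, groups);          /* supplementary groups of the caller */
    for (int i = 0; i < n; i++)
        if (groups[i] == gid)
            return true;
    return false;
}

bool unix_authorize(uid_t proc_uid, gid_t proc_gid,
                    const struct stat *file, unsigned want)
{
    unsigned granted;

    if (proc_uid == file->st_uid)                       /* owner class */
        granted = (file->st_mode >> 6) & 7;
    else if (proc_gid == file->st_gid ||
             in_supplementary_groups(file->st_gid))     /* group class */
        granted = (file->st_mode >> 3) & 7;
    else                                                /* others class */
        granted = file->st_mode & 7;

    return (granted & want) == want;                    /* all requested bits present */
}

int main(int argc, char **argv)
{
    struct stat st;
    if (argc < 2 || stat(argv[1], &st) != 0)
        return 1;
    /* 4 = read, as in the r bit of the mode string */
    printf("%s\n", unix_authorize(getuid(), getgid(), &st, 4)
                       ? "read allowed" : "read denied");
    return 0;
}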

You can authorize specific users for administrative access to your security package or for OMVS shell
write privileges.

USS Authorization Requirements


To complete this task, you must have the following rights:

 Administrative access to your security package


 OMVS shell write privileges

To authorize a user, you can use one of the following OMVS segments:
 Default OMVS segment
 Specific OMVS segment

Set Up OMVS Segment


This procedure sets up an OMVS segment for each user ID that you want to authorize.

Follow these steps:

1. Assign an OMVS UID number to each user ID. If your security administrator
does not have a policy for assigning OMVS UID numbers, use a unique number.
2. Define the OMVS segment for each user. For a user ID uuuuuuu, UID
number nnn, and user home directory /u/name, enter the following commands:
o For an ACF2 for z/OS system, enter the following commands:

SET PROFILE(USER) DIV(OMVS)
INSERT uuuuuuu UID(nnn) HOME(/u/name) PROGRAM(/bin/sh)

o For a Top Secret for z/OS system, enter the following commands:

TSS ADD(uuuuuuu) HOME(/u/name) OMVSPGM(/bin/sh) UID(nnn) GROUP(OMVSGRP)

o For a RACF system, enter the following command:

ALU uuuuuuu OMVS(UID(nnn) HOME(/u/name) PROGRAM(/bin/sh))

The OMVS segment must contain a home directory (HOME) and a login shell
(PROGRAM or OMVSPGM).

3. Confirm the contents of the OMVS segment by entering the following commands:
o For an ACF2 for z/OS system, enter the following commands:

SET PROFILE(USER) DIV(OMVS)
LIST uuuuuuu

o For a Top Secret for z/OS system, enter the following command:

TSS LIS(uuuuuuu) DATA(ALL)

o For a RACF system, enter the following command:

LISTUSER uuuuuuu OMVS NORACF

4. Define a home directory to each user ID, and ensure that the UID has at least
READ access to it.

For example, to set up the directory /u/name for UID(nnn), issue the following
commands in the OMVS UNIX shell:

mkdir /u/name
chown nnn /u/name
chmod 777 /u/name

5. Confirm the owner and access to the directory by using the following command:

ls -ld /u/name

The following result appears:


drwxrwxrwx 2
user

group
8192 Sep 31 14:58 /u/
name

5.4 UNIX Security Analysis


When performing a forensic analysis of Unix file systems, there are features that should be inspected
beyond just the file date-time stamps. As shown in Figure 6.3, the last time a Unix file system was
mounted is recorded, which should be consistent with other activities on the system.
File permissions can also be revealing in a case, enabling digital investigators to determine which user
account was used to create a particular file, or which group of users had access to data of concern.
Because Unix systems break the disk into block groups, data of similar types are often stored
in the same area of the disk, facilitating more efficient and focused searching for deleted data. For
instance, efforts to recover deleted logs can focus on the /var/log partition.
Unix systems in an Enterprise are often configured with some remote storage locations, which can be seen
in /etc/fstab as shown here:
/dev/hda1 / ext2 defaults 1 1
/dev/hda7 /tmp ext2 defaults 1 2
/dev/hda5 /usr ext2 defaults 1 2
/dev/hda6 /var ext2 defaults 1 2
/dev/hda8 swap swap defaults 0 0
/dev/fd0 /mnt/floppy ext2 user,noauto 0 0
/dev/hdc /mnt/cdrom iso9660 user,noauto,ro 0 0
None /dev/pts devpts gid=5,mode=620 0 0
None /proc proc defaults 0 0
remote-server:/home/accts /home/accts nfs
bg,hard,intr,rsize=8192,wsize=8192
remote-server:/var/spool/mail /var/spool/mail nfs
bg,hard,intr,noac,rsize=8192,wsize=8192

In this instance, all files in user home directories are stored on a remote server named remote-
server along with spooled e-mail. Therefore, preserving just the hard drive of the single system would
not preserve all potentially relevant files associated with the use of that system. It would also be necessary
to preserve files stored on the remote server.
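
In such cases it can be useful to enumerate remote mounts programmatically rather than reading /etc/fstab by eye. A minimal C sketch using the standard getmntent() interface (pointed either at the live /etc/fstab or at a preserved copy of it) might look like this:

/* Sketch: list NFS entries from an fstab-style file using getmntent().
   The path can point at the original /etc/fstab or at a preserved copy. */
#include <stdio.h>
#include <string.h>
#include <mntent.h>

int main(void)
{
    FILE *fp = setmntent("/etc/fstab", "r");
    if (fp == NULL) { perror("setmntent"); return 1; }

    struct mntent *m;
    while ((m = getmntent(fp)) != NULL) {
        if (strcmp(m->mnt_type, "nfs") == 0)          /* remote storage entries */
            printf("%s mounted on %s (%s)\n",
                   m->mnt_fsname, m->mnt_dir, m->mnt_opts);
    }
    endmntent(fp);
    return 0;
}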
5.5 UNIX Vulnerabilities

CVE-2001-0369

Buffer overflow in lpsched on DGUX version R4.20MU06 and MU02 allows a local attacker to obtain root access via a

long command line argument (non-existent printer name).

CVE-2001-0134

Buffer overflow in cpqlogin.htm in web-enabled agents for various Compaq management software products such as

Insight Manager and Management Agents allows remote attackers to execute arbitrary commands via a long user

name.

CVE-2000-0845

kdebug daemon (kdebugd) in Digital Unix 4.0F allows remote attackers to read arbitrary files by specifying the full file

name in the initialization packet.

CVE-2000-0315

traceroute in NetBSD 1.3.3 and Linux systems allows local unprivileged users to modify the source address of the

packets, which could be used in spoofing attacks.

CVE-2000-0314

traceroute in NetBSD 1.3.3 and Linux systems allows local users to flood other systems by providing traceroute with

a large waittime (-w) option, which is not parsed properly and sets the time delay for sending packets to zero.

CVE-1999-1458

Buffer overflow in at program in Digital UNIX 4.0 allows local users to gain root privileges via a long command line

argument.

CVE-1999-1221

dxchpwd in Digital Unix (OSF/1) 3.x allows local users to modify arbitrary files via a symlink attack on the

dxchpwd.log file.
CVE-1999-1210

xterm in Digital UNIX 4.0B *with* patch kit 5 allows local users to overwrite arbitrary files via a symlink attack on a

core dump file, which is created when xterm is called with a DISPLAY environmental variable set to a display that

xterm cannot access.

CVE-1999-1044

Vulnerability in Advanced File System Utility (advfs) in Digital UNIX 4.0 through 4.0d allows local users to gain

privileges.

CVE-1999-0714

Vulnerability in Compaq Tru64 UNIX edauth command.

CVE-1999-0713

The dtlogin program in Compaq Tru64 UNIX allows local users to gain root privileges.

CVE-1999-0691

Buffer overflow in the AddSuLog function of the CDE dtaction utility allows local users to gain root privileges via a

long user name.

CVE-1999-0687

The ToolTalk ttsession daemon uses weak RPC authentication, which allows a remote attacker to execute

commands.

CVE-1999-0513

ICMP messages to broadcast addresses are allowed, allowing for a Smurf attack that can cause a denial of service.

CVE-1999-0358

Digital Unix 4.0 has a buffer overflow in the inc program of the mh package.

CVE-1999-0073

Telnet allows a remote client to specify environment variables including LD_LIBRARY_PATH, allowing an attacker to

bypass the normal system libraries and gain root access.
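
Several of the entries above (lpsched, at, inc, dtaction) are classic stack buffer overflows triggered by an over-long command-line argument or user name. The hypothetical C fragment below is not taken from any of those programs; it only illustrates the unsafe pattern they share, an unbounded copy of attacker-controlled input into a fixed-size stack buffer, together with a bounded alternative.

/* Hypothetical illustration of the overflow pattern behind several CVEs
   above. Not code from any listed program. */
#include <stdio.h>
#include <string.h>

void vulnerable(const char *arg)
{
    char name[64];
    strcpy(name, arg);               /* UNSAFE: no bound, a long argument smashes the stack */
    printf("printer: %s\n", name);
}

void safer(const char *arg)
{
    char name[64];
    snprintf(name, sizeof(name), "%s", arg);   /* bounded copy, always NUL-terminated */
    printf("printer: %s\n", name);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        safer(argv[1]);              /* call the bounded version */
    return 0;
}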

5.6 Windows Security


Microsoft’s Windows operating system (OS) is possibly the most famous OS on Earth, and it is ubiquitous
in the business world. But the Windows OS has also evolved since its first appearance, adding
considerable security capabilities and features.

This article will show a brief history of Windows OS security development and refinement since Windows
1.

Windows’ main weakness


Windows is loved by its users because it offers tremendous application availability. However, this strength
is also its biggest weakness. By allowing an open approach toward applications, Windows also exposes
itself to malware more often than other operating systems.

Windows 1 — Windows 9X (1983-1996)


Windows 1 was the first version of Windows OS and was released on November 20, 1985. This version of
Windows didn’t come with OS security. In fact, Windows 1 through Windows 9x didn’t have OS security
systems. They had rudimentary logon security (which did not store passwords in the OS) and very limited
logging capabilities.

A major limitation to early Windows OS versions (and MS-DOS) was the fact that the file system they used
was File Allocation Table (FAT). While considered a robust file system for early OSes, it was intended for
smaller drives and for simplistic folder hierarchies. FAT used no security measures, meaning it was easy to
access, modify and delete information stored using this system.

Security issues were compounded by the fact that 16-bit Windows OS versions were practically impossible
to update without expanding to 32-bit. The now-ubiquitous and frequent Windows security update didn’t
exist in the early days of Windows. On top of all of that, the early Windows versions didn’t allow multiple
users, so every user of a shared computer used the same login credentials.
Windows NT
This version of Windows OS was a watershed moment for Microsoft in terms of OS security. Windows NT
was the first security-minded Windows OS and used the New Technology File System (NTFS) as a filing
system. NTFS offered considerable improvements over FAT, including:

 NTFS offers longer file and folder names. The original FAT naming scheme (8.3) allows only 11 characters, while
NTFS allows up to 255
 NTFS offers greater object security by storing file access rights
 NTFS logs every time it writes information (crucial for security audits)
 Encryption

Windows 2000
Windows 2000 introduced the Data Protection Application Programming Interface (DPAPI). This built-in
component lets applications protect secrets, such as asymmetric private keys, by encrypting them with keys derived from the user's logon credentials.

Windows XP
The next major OS security facelift for Windows OS came with Windows XP in 2001. Sold as Microsoft’s
most secure OS ever, Windows XP ended up becoming the most patched because of its widespread use.
Among the security improvements in Windows XP were:

 AutoPlay: Allows the OS to identify when removable media was inserted


 Improved DPAPI security by using a SHA1 hash of the Master Key Password
 Password Reset Wizard: Uses a password reset disk
 Credential manager: Stores user credentials for user accounts
 Introduction of Windows Security Center: Continually monitors security and services
 Improved encryption capabilities

Windows Vista
Released in 2007, Windows Vista continued the tradition of improving security by introducing the
following new OS security capabilities:

 User Account Control (UAC): This security control feature helps to ensure that unauthorized OS
changes are not made without administrator approval
 Windows Defender: This in-built anti-spyware solution protects the OS from unwanted or rogue
software by blocking it and when needed, removing it
 BitLocker: Windows encryption feature

Windows 7
Windows 7 debuted with the following security improvements:
 Data Execution Prevention (DEP): This security technique marks data pages as non-executable,
stopping attackers from running injected code
 Address Space Layout Randomization (ASLR): Randomizes memory addresses, making it harder to
carry out memory-based attacks
 Improved cryptography
 Enhanced BitLocker capabilities

Windows 8
The security changes in Windows 8 were mostly hardware-based. One OS-based security change was the
addition of AppContainer. Enabled by a new integrity level that stops read and write access to higher
integrity items, AppContainer was an improvement over previous versions of Windows that allowed low-
integrity applications to access medium- and high-integrity objects. AppContainer enriched the OS’s
overall security landscape.

Windows 10
Released in 2015, Windows 10 comes with:

 Windows Defender Credential Guard: This new Windows Defender capability isolates credentials
and only allows privileged system software to access them, making it harder to attack the OS
 An improved security baseline, including hardening of svchost.exe processes

Conclusion
At its start, the Windows OS wasn’t known for security. Starting in Windows NT, with its NTFS file system,
Windows has grown into a reliably secure OS.

Windows still leaves itself open to more attacks by allowing the open development and use of third-
party applications. With every update, though, more security features are enabled by default, making
every Windows system more secure.

5.7 Windows Protection System


Security and privacy depend on an operating system that guards your system and information from the
moment it starts up, providing fundamental chip-to-cloud protection. Windows 11 is the most secure
Windows yet with extensive security measures designed to help keep you safe. These measures include
built-in advanced encryption and data protection, robust network and system security, and intelligent
safeguards against ever-evolving threats.

The following sections summarize the operating system security features and capabilities in Windows.
System security
Secure Boot and Trusted Boot: Secure Boot and Trusted Boot help to prevent malware and corrupted components from loading when a device starts. Secure Boot starts with initial boot-up protection, and then Trusted Boot picks up the process. Together, Secure Boot and Trusted Boot help to ensure the system boots up safely and securely.

Measured Boot: Measured Boot measures all important code and configuration settings during the boot of Windows, including the firmware, boot manager, hypervisor, kernel, secure kernel, and operating system. Measured Boot stores the measurements in the TPM on the machine and makes them available in a log that can be tested remotely to verify the boot state of the client. The Measured Boot feature provides anti-malware software with a trusted (resistant to spoofing and tampering) log of all boot components that started before it. The anti-malware software can use the log to determine whether components that ran before it are trustworthy or infected with malware. The anti-malware software on the local machine can send the log to a remote server for evaluation. The remote server may initiate remediation actions, either by interacting with software on the client or through out-of-band mechanisms, as appropriate.

Device health attestation service: The Windows device health attestation process supports a zero-trust paradigm that shifts the focus from static, network-based perimeters to users, assets, and resources. The attestation process confirms that the device, firmware, and boot process are in a good state and haven't been tampered with before they can access corporate resources. The determinations are made with data stored in the TPM, which provides a secure root of trust. The information is sent to an attestation service, such as Azure Attestation, to verify that the device is in a trusted state. Then, an MDM tool like Microsoft Intune reviews device health and connects this information with Microsoft Entra ID for conditional access.

Windows security policy settings and auditing: Microsoft provides a robust set of security settings policies that IT administrators can use to protect Windows devices and other resources in their organization.

Assigned Access: Some desktop devices in an enterprise serve a special purpose, for example, a PC in the lobby that customers use to see your product catalog, or a PC displaying visual content as a digital sign. Windows client offers two different locked-down experiences for public or specialized use: a single-app kiosk that runs a single Universal Windows Platform (UWP) app in full screen above the lock screen, or a multi-app kiosk that runs one or more apps from the desktop. Kiosk configurations are based on Assigned Access, a feature in Windows that allows an administrator to manage the user's experience by limiting the application entry points exposed to the user.

Virus and threat protection


Microsoft Defender Antivirus: Microsoft Defender Antivirus is a protection solution included in all versions of Windows. From the moment you boot Windows, Microsoft Defender Antivirus continually monitors for malware, viruses, and security threats. Updates are downloaded automatically to help keep your device safe and protect it from threats. Microsoft Defender Antivirus includes real-time, behavior-based, and heuristic antivirus protection. The combination of always-on content scanning, file and process behavior monitoring, and other heuristics effectively prevents security threats. Microsoft Defender Antivirus continually scans for malware and threats and also detects and blocks potentially unwanted applications (PUA), which are applications deemed to negatively impact your device but that aren't considered malware.

Local Security Authority (LSA) protection: Windows has several critical processes to verify a user's identity. Verification processes include the Local Security Authority (LSA), which is responsible for authenticating users and verifying Windows logins. LSA handles tokens and credentials, such as passwords, that are used for single sign-on to a Microsoft account and Azure services. To help protect these credentials, additional LSA protection only allows loading of trusted, signed code and provides significant protection against credential theft. LSA protection is enabled by default on new, enterprise-joined Windows 11 devices, with added support for non-UEFI lock and policy management controls via MDM and group policy.

Attack surface reduction (ASR): Attack surface reduction (ASR) rules help to prevent software behaviors that are often abused to compromise your device or network. By reducing the number of attack surfaces, you can reduce the overall vulnerability of your organization. Administrators can configure specific ASR rules to help block certain behaviors, such as launching executable files and scripts that attempt to download or run files, running obfuscated or otherwise suspicious scripts, or performing behaviors that apps don't usually initiate during normal day-to-day work.

Tamper protection settings for MDE: Tamper protection is a capability in Microsoft Defender for Endpoint that helps protect certain security settings, such as virus and threat protection, from being disabled or changed. During some kinds of cyber attacks, bad actors try to disable security features on devices. Disabling security features provides bad actors with easier access to your data, the ability to install malware, and the ability to exploit your data, identity, and devices. Tamper protection helps guard against these types of activities.

Controlled folder access: You can protect your valuable information in specific folders by managing app access to those folders. Only trusted apps can access protected folders, which are specified when controlled folder access is configured. Commonly used folders, such as those used for documents, pictures, and downloads, are typically included in the list of controlled folders. Controlled folder access works with a list of trusted apps. Apps that are included in the list of trusted software work as expected. Apps that aren't included in the trusted list are prevented from making any changes to files inside protected folders. Controlled folder access helps to protect users' valuable data from malicious apps and threats, such as ransomware.

Exploit protection: Exploit protection automatically applies several exploit mitigation techniques to operating system processes and apps. Exploit protection works best with Microsoft Defender for Endpoint, which gives organizations detailed reporting into exploit protection events and blocks as part of typical alert investigation scenarios. You can enable exploit protection on an individual device and then use MDM or group policy to distribute the configuration file to multiple devices. When a mitigation is encountered on the device, a notification is displayed from the Action Center. You can customize the notification with your company details and contact information. You can also enable the rules individually to customize which techniques the feature monitors.

Microsoft Defender SmartScreen: Microsoft Defender SmartScreen protects against phishing, malware websites and applications, and the downloading of potentially malicious files. For enhanced phishing protection, SmartScreen also alerts people when they're entering their credentials into a potentially risky location. IT can customize which notifications appear via MDM or group policy. The protection runs in audit mode by default, giving IT admins full control to make decisions around policy creation and enforcement.

Microsoft Defender for Endpoint: Microsoft Defender for Endpoint is an enterprise endpoint detection and response solution that helps security teams detect, investigate, and respond to advanced threats. Organizations can use the rich event data and attack insights Defender for Endpoint provides to investigate incidents. Defender for Endpoint brings together endpoint behavioral sensors, cloud security analytics, threat intelligence, and rich response capabilities to provide a more complete picture of security incidents.

Network security
Transport Layer Security (TLS): Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a network. TLS 1.3 is the latest version of the protocol and is enabled by default in Windows 11. This version eliminates obsolete cryptographic algorithms, enhances security over older versions, and aims to encrypt as much of the TLS handshake as possible. The handshake is more performant, with one fewer round trip per connection on average, and supports only five strong cipher suites, which provide perfect forward secrecy and less operational risk.

Domain Name System (DNS) security: Starting in Windows 11, the Windows DNS client supports DNS over HTTPS (DoH), an encrypted DNS protocol. This allows administrators to ensure their devices protect DNS queries from on-path attackers, whether they're passive observers logging browsing behavior or active attackers trying to redirect clients to malicious sites. In a zero-trust model, where no trust is placed in a network boundary, having a secure connection to a trusted name resolver is required.

Bluetooth pairing and connection protection: The number of Bluetooth devices connected to Windows continues to increase. Windows supports all standard Bluetooth pairing protocols, including classic and LE Secure Connections, secure simple pairing, and classic and LE legacy pairing. Windows also implements host-based LE privacy. Windows updates help users stay current with OS and driver security features in accordance with the Bluetooth Special Interest Group (SIG) and Standard Vulnerability Reports, as well as issues beyond those required by the Bluetooth core industry standards. Microsoft strongly recommends that users ensure the firmware and/or software of their Bluetooth accessories are kept up to date.

Wi-Fi security: Wi-Fi Protected Access (WPA) is a security certification program designed to secure wireless networks. WPA3 is the latest version of the certification and provides a more secure and reliable connection method compared to WPA2 and older security protocols. Windows supports three WPA3 modes: WPA3 Personal with the Hash-to-Element (H2E) protocol, WPA3 Enterprise, and WPA3 Enterprise 192-bit Suite B. Windows 11 also supports WFA-defined WPA3 Enterprise, which includes enhanced server certificate validation and TLS 1.3 for authentication using EAP-TLS.

Opportunistic Wireless Encryption (OWE): Opportunistic Wireless Encryption (OWE) is a technology that allows wireless devices to establish encrypted connections to public Wi-Fi hotspots.

Windows Firewall: Windows Firewall provides host-based, two-way network traffic filtering, blocking unauthorized traffic flowing into or out of the local device based on the types of networks to which the device is connected. Windows Firewall reduces the attack surface of a device with rules to restrict or allow traffic by many properties, such as IP addresses, ports, or program paths. Reducing the attack surface of a device increases manageability and decreases the likelihood of a successful attack. With its integration with Internet Protocol Security (IPsec), Windows Firewall provides a simple way to enforce authenticated, end-to-end network communications. It provides scalable, tiered access to trusted network resources, helping to enforce integrity of the data and optionally helping to protect its confidentiality. Windows Firewall is a host-based firewall that is included with the operating system; there's no additional hardware or software required. Windows Firewall is also designed to complement existing non-Microsoft network security solutions through a documented application programming interface (API).

Virtual private network (VPN): The Windows VPN client platform includes built-in VPN protocols, configuration support, a common VPN user interface, and programming support for custom VPN protocols. VPN apps are available in the Microsoft Store for both enterprise and consumer VPNs, including apps for the most popular enterprise VPN gateways. In Windows 11, the most commonly used VPN controls are integrated right into the Quick Actions pane. From the Quick Actions pane, users can see the status of their VPN, start and stop the VPN tunnels, and access the Settings app for more controls.

Always On VPN (device tunnel): With Always On VPN, you can create a dedicated VPN profile for the device. Unlike User Tunnel, which only connects after a user logs on to the device, Device Tunnel allows the VPN to establish connectivity before a user signs in. Both Device Tunnel and User Tunnel operate independently with their VPN profiles, can be connected at the same time, and can use different authentication methods and other VPN configuration settings as appropriate.

DirectAccess: DirectAccess allows connectivity for remote users to organization network resources without the need for traditional Virtual Private Network (VPN) connections. With DirectAccess connections, remote devices are always connected to the organization and there's no need for remote users to start and stop connections.

Server Message Block (SMB) file service: SMB Encryption provides end-to-end encryption of SMB data and protects data from eavesdropping on internal networks. In Windows 11, the SMB protocol has significant security updates, including AES-256-bit encryption, accelerated SMB signing, Remote Direct Memory Access (RDMA) network encryption, and SMB over QUIC for untrusted networks. Windows 11 introduces AES-256-GCM and AES-256-CCM cryptographic suites for SMB 3.1.1 encryption. Windows administrators can mandate the use of more advanced security or continue to use the more compatible, and still-safe, AES-128 encryption.

Server Message Block Direct (SMB Direct): SMB Direct (SMB over remote direct memory access) is a storage protocol that enables direct memory-to-memory data transfers between device and storage, with minimal CPU usage, while using standard RDMA-capable network adapters. SMB Direct supports encryption, and you can now operate with the same safety as traditional TCP and the performance of RDMA. Previously, enabling SMB encryption disabled direct data placement, making RDMA as slow as TCP. Now data is encrypted before placement, leading to relatively minor performance degradation while adding AES-128 and AES-256 protected packet privacy.

Encryption and data protection


BitLocker management: The BitLocker CSP allows an MDM solution, like Microsoft Intune, to manage the BitLocker encryption features on Windows devices. This includes OS volumes, fixed drives, removable storage, and recovery key management into Microsoft Entra ID.

BitLocker enablement: BitLocker Drive Encryption is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker uses the AES algorithm in XTS or CBC mode of operation with a 128-bit or 256-bit key length to encrypt data on the volume. Cloud storage on Microsoft OneDrive or Azure can be used to save recovery key content. BitLocker can be managed by any MDM solution, such as Microsoft Intune, using a configuration service provider (CSP). BitLocker provides encryption for the OS, fixed data, and removable data drives, leveraging technologies like the hardware security test interface (HSTI), Modern Standby, UEFI Secure Boot, and TPM.

Encrypted hard drive: Encrypted hard drives are a class of hard drives that are self-encrypted at the hardware level and allow for full disk hardware encryption while being transparent to the device user. These drives combine the security and management benefits provided by BitLocker Drive Encryption with the power of self-encrypting drives. By offloading the cryptographic operations to hardware, encrypted hard drives increase BitLocker performance and reduce CPU usage and power consumption. Because encrypted hard drives encrypt data quickly, BitLocker deployment can be expanded across enterprise devices with little to no impact on productivity.

Personal data encryption (PDE): Personal data encryption (PDE) works with BitLocker and Windows Hello for Business to further protect user documents and other files, including when the device is turned on and locked. Files are encrypted automatically and seamlessly to give users more security without interrupting their workflow. Windows Hello for Business is used to protect the container which houses the encryption keys used by PDE. When the user signs in, the container gets authenticated to release the keys that decrypt user content.

Email encryption (S/MIME): Email encryption enables users to encrypt outgoing email messages and attachments, so only intended recipients with a digital ID (certificate) can read them. Users can digitally sign a message, which verifies the identity of the sender and confirms the message hasn't been tampered with. The encrypted messages can be sent by a user to other users within their organization or to external contacts if they have proper encryption certificates.

5.8 Windows Authorization


When we create a web application, we want to make information available to the application's users. This might
be text, data, documents, multimedia content, and so on. Sometimes, we also need to manage access to
this information, restricting certain users' access to some of it. This is where authentication and
authorization come in.

Before presenting this Windows account authentication and authorization proposal, I would like to define
what authentication and authorization mean, the difference between the two and how the .NET
Framework manages them. If you are already confident with these concepts you can skip to the next
section.

Authorization is the ability to grant or deny access to resources, according to the rights defined for the
different kinds of entities requesting them.
When dealing with Windows Operating System, and its underlying NTFS file system, authorizations are
managed by assigning to each object (files, registry keys, cryptographic keys and so on) a list of the
permissions granted to each user recognized by the system.

This list is commonly called the “Access Control List” or ACL (the correct name is actually “Discretionary
Access Control List” or DACL, to distinguish it from the “System Access Control List” or SACL). The ACL
is a collection of “Access Control Entries” or ACEs. Each ACE contains the identifier for a specific user
(“Security Identifier” or SID) and the permissions granted to it.

As you probably already know, to view the ACL for a specific file, you right-click the file name, select
Properties and click on the Security tab. You will see something like this:

Figure 1: ACL editor for a demo file.

The “Group or user names” section lists all the users and groups, by name, which have at least one ACE
in the ACL, while the “Permissions” section lists all the permissions associated with a specific group or
user (or, rather, with its SID). You can modify the ACL by pressing the Edit button.

To view the ACL of a specific file using the .NET Framework, you can use the FileSecurity class that you
can find under the System.Security.AccessControl namespace. The following example shows how to
browse the ACL of a file named “C:\resource.txt”:
FileSecurity f = File.GetAccessControl(@"c:\resource.txt");
AuthorizationRuleCollection acl = f.GetAccessRules(true, true, typeof(NTAccount));

foreach (FileSystemAccessRule ace in acl)
{
    Console.WriteLine("Identity: " + ace.IdentityReference.ToString());
    Console.WriteLine("Access Control Type: " + ace.AccessControlType);
    Console.WriteLine("Permissions: " + ace.FileSystemRights.ToString() + "\n");
}

By running this code in a console application, you get the following output:

Figure 2: Output of a console application that lists the ACEs of a demo file.

Authorization in ASP.NET Applications


Suppose that we have a file, “resource.txt”, inside the web application root that we want to make available
only to administrators. We can prevent users who aren't administrators from accessing the file by setting
up its ACL properly. For simplicity, let's say we want to prevent “CASSANDRA\matteo” from accessing it.
Figure 6 shows how to do that:
Figure 6: ACL for the CASSANDRA\matteo user with denied permissions.

We have denied the Read and Read & execute attributes to the CASSANDRA\matteo account, but we
want to see what happens when our demo application tries to open the file. To do so, we add a new
method to it:

/// <summary>
/// Check if a resource can be loaded.
/// </summary>
public void CanLoadResource()
{
    FileStream stream = null;

    try
    {
        stream = File.OpenRead(Server.MapPath("resource.txt"));
        WriteToPage("Access to file allowed.");
    }
    catch (UnauthorizedAccessException)
    {
        WriteException("Access to file denied.");
    }
    finally
    {
        if (stream != null) stream.Dispose();
    }
}

The CanLoadResource() method tries to open resource.txt, in order to read its content. If the load
succeeds, the “Access to file allowed.” message is written on the page. If
an UnauthorizedAccessException exception is thrown, the message “Access to file denied.” is written on
the page, as an error. The WriteException() method is a helper method used to write an exception
message on the page.

Now we launch our application with authorizations set as in Figure 6 and use “CASSANDRA\matteo” to
log into the application. Doing so, we obtain something that may seem strange:
Figure 7: Logon with user CASSANDRA\matteo with permissions as in Figure 6.

As you can see in the Figure 7, resource.txt can be loaded by the application even if the credentials
provided for the login refer to an account that has no permissions to access it.

This happens because, in this case, the Application Pool associated with the web application works in
Integrated mode, which relates authentication and authorization to different users. Specifically,
authentication involves the user identified by the credentials provided, while authorization involves the
user account used by the Application Pool associated with the application. In our example, the Application
Pool uses the NETWORK SERVICE account, which has permission to access the file.

We’ll try to deny these permissions by modifying the ACL of the resource.txt file:
Figure 8: ACL for the NETWORK SERVICE account with denied permissions.

If we launch our application, we now obtain:


Figure 9: Logon with user CASSANDRA\matteo, still with the permissions in
Figure 8.

As you can see, the file is no longer available, demonstrating that the authorization process involves the
NETWORK SERVICE account.

To use authorization at the authenticated user level, we need to use Impersonation. With impersonation,
we are able to allow the Application Pool to run with the permissions associated with the authenticated
user. Impersonation only works when the Application Pool runs in Classic Mode (in Integrated mode the
web application generates the “500 – Internal Server Error” error). To enable impersonation, we need to
enable the ASP.NET Impersonation feature, as noted in Figure 3 and the discussion that followed it.

If we switch our Application Pool to Classic Mode (enabling the ASP.NET 4.0 ISAPI filters, too) and
enable ASP.NET impersonation, the demo application output becomes:
Figure 10: Logon with user CASSANDRA\matteo, with permissions as in Figure 8
and Application Pool in Classic Mode.

We are now able to load resource.txt even if the NETWORK SERVICE account has no permissions to
access it. This shows that the permissions used were those associated with the authenticated user, not
with the Application Pool’s identity.

To take advantage of Integrated mode without having to abandon impersonation, we can use a different
approach: running our application in Integrated mode and enabling impersonation at the code level when
we need it. To do so, we use the WindowsImpersonationContext class, defined under
the System.Security.Principal namespace. We modify the CanLoadResource() method as follows:

/// <summary>
/// Check if a resource can be loaded.
/// </summary>
public void CanLoadResource()
{
    FileStream stream = null;
    WindowsImpersonationContext imp = null;

    try
    {
        IIdentity i = Thread.CurrentPrincipal.Identity;

        imp = ((WindowsIdentity)i).Impersonate();

        stream = File.OpenRead(Server.MapPath("resource.txt"));
        WriteToPage("Access to file allowed.");
    }
    catch (UnauthorizedAccessException)
    {
        WriteException("Access to file denied.");
    }
    finally
    {
        if (imp != null)
        {
            imp.Undo();
            imp.Dispose();
        }

        if (stream != null) stream.Dispose();
    }
}

With the modification added, we can force the application to impersonate the authenticated user before
opening the file. To achieve this, we have used the Impersonate() method of the WindowsIdentity class
(the class to which the Identity property belongs). With it, we have created
a WindowsImpersonationContext object. This object has a method, Undo(), that is able to revert the
impersonation after the resource has been used.

If we try to run our application with permissions as in Figure 8, we see that we are able to access
resource.txt even if the Application Pool is working in Integrated Mode.

Now we can resolve the security issue presented earlier. If we want to use Windows accounts to develop
a “role-based” application, we can use authentication to identify the user requesting resources and we
can use authorization, based on the user’s identity, to prevent access to resources not available for the
user’s role. If, for example, the resource we want to protect is a web page (like the admin page), we need
to set its ACL with the right ACEs, and use impersonation to force the Application Pool to use the
authenticated user’s permissions. However, as we have seen, when the Application Pool uses Integrated
mode, impersonation is available only at code level. So, although it’s easy in this situation to prevent
access to resources (like the resource.txt file) needed by a web page, it’s not so easy to prevent access
to a web page itself. For this, we need to use another IIS feature available in IIS Manager, .NET
Authorization Rules:
Figure 11: .NET Authorization Rules feature of IIS7 and IIS7.5.

.NET Authorization Rules is an authorization feature that works at the ASP.NET level, not at the IIS or file system
level (as ACLs do). It therefore permits us to ignore how IIS works and to use impersonation both in Integrated
mode and in Classic mode.

5.9 Windows Security Analysis


Windows NT provides a unified access control facility which applies to processes as well as
other objects in the system. The access control is built around two components, an access token
associated with every process and a security descriptor connected to every object where
interprocess access is possible. When a user logs onto the system, he must provide a username
and a password to authenticate himself to Windows NT. If he is accepted, a process and an
access token are created for him. The access token is associated with the process. An important
attribute in the access token is the Security ID (SID). The SID is an identifier that identifies the
user to the system in areas concerning security. A process inherits the access token of its creator.
There are two reasons for maintaining access tokens [23]:

– It keeps all the necessary security information in one place in order to speed up access
validation.

– It allows every process to modify its own security attributes, in a restricted manner, without
affecting other processes in the system.
Allowing the process to change its own security attributes may appear strange at first glance, but
a newly created process is normally started with all attributes disabled. Later, when a thread
wishes to perform an allowed privileged operation, the thread enables the necessary attributes
before the operation is executed.
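
To make the notion of an access token and its Security ID concrete, the following minimal C sketch (Windows only, linked against Advapi32) opens the current process's token and prints the user SID it carries; error handling is reduced to the essentials and the buffer size is an assumption sufficient for typical tokens.

/* Sketch: read the user SID from the current process's access token.
   Link with Advapi32.lib. */
#include <windows.h>
#include <sddl.h>
#include <stdio.h>

int main(void)
{
    HANDLE tok;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &tok))
        return 1;

    BYTE buf[256];                           /* large enough for a typical TOKEN_USER */
    DWORD len = 0;
    if (GetTokenInformation(tok, TokenUser, buf, sizeof(buf), &len)) {
        TOKEN_USER *tu = (TOKEN_USER *)buf;
        LPSTR sid_str = NULL;
        if (ConvertSidToStringSidA(tu->User.Sid, &sid_str)) {   /* SID as text, e.g. S-1-5-21-... */
            printf("user SID: %s\n", sid_str);
            LocalFree(sid_str);
        }
    }
    CloseHandle(tok);
    return 0;
}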

1 Subjects and Objects

A subject is a process with one or more threads that execute on behalf of a user, and objects are all
other Windows NT resources that can be accessed and used by a subject. The privileges, the user SID
and the group SID of a user’s processes are stored in the access token held by the process. This token,
called the primary token, is used to grant threads different types of access to different objects. Threads
can also hold one additional token, called an impersonation token. This token, which allows the thread
to act on that subject’s behalf, is given to the thread by another subject. The information in the token is
compared with the information stored in the object’s Access Control List (ACL). The Security Reference
Monitor (SRM) performs the comparison when the object is first opened by the process.

2 Users and Groups

Only legitimate users may access objects on a Windows NT system. Such a user must have a valid user
account. In addition, in order to access a particular object, the user needs to have access rights to it.
Guest and Administrator are the default accounts in all Windows NT systems. Another supported
concept is groups. With groups, permissions can be granted to a set of related users, which facilitates
the procedure of granting rights and permissions. A user may belong to more than one group at the
same time. There are a number of predefined groups in Windows NT: Administrator, Backup Operators,
Printer Operators, Power Users, Users, and Guest. There are basically two types of accounts, global and
local . A global account is accessible and visible inside a whole domain, while local accounts are
accessible and visible only on the computer on which they were created. There are also two types of
groups, local and global. Local groups can only be used inside the domain, while global groups can be
used, and are visible, in all the trusted and trusting domains of the system.

3 Object Security

A security descriptor is associated with every named object in Windows NT. It holds information about
audit logging for, and access control to, the object. The access control is implemented by an Access Control List (ACL).
The ACL consists of a list of Access Control Entries (ACEs). An ACE contains a Security IDentifier (SID) and
the operation that the owner of this SID is allowed (or disallowed) to perform on the object. ACEs that
disallow operations are generally placed before ACEs that allow operations. The ACL is tied to the object
when it is created in one of three different ways.

When a process wants access to an object it has to open the object. In the open call, the process must
specify what operations it intends to perform on the object. The system will then search the ACL of the
object to try to find an ACE that contains the SID of the caller (or of any group of which he is a member)
and allows or disallows the requested operations. The search is done on a “first match” basis. If the call
succeeds, a handle to the object is passed to the process. All subsequent calls to the object will be made
through this handle so no further checks are done as long as the process performs only the operations
specified in the open call. If new operations are required, the object must be re-opened.
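
The "first match" evaluation described above can be sketched as follows. The structures here are simplified stand-ins for the real Windows security descriptor types, used only to show the order of the checks.

/* Simplified sketch of first-match ACE evaluation; these structures are
   illustrative stand-ins, not the real Windows security descriptor types. */
#include <stdio.h>
#include <stdbool.h>

typedef unsigned int SID_T;                 /* simplified SID */

typedef struct {
    SID_T    sid;                           /* user or group the entry applies to */
    bool     deny;                          /* deny entries are generally placed before allow entries */
    unsigned mask;                          /* operations covered by this entry */
} ACE_T;

/* Returns true if the requested operations are granted.
   caller_sids holds the caller's user SID plus its group SIDs. */
bool check_access(const ACE_T *acl, int n_aces,
                  const SID_T *caller_sids, int n_sids, unsigned requested)
{
    for (int i = 0; i < n_aces; i++) {                /* walk the ACL in order */
        for (int j = 0; j < n_sids; j++) {
            if (acl[i].sid == caller_sids[j] &&
                (acl[i].mask & requested)) {          /* first matching ACE decides */
                return !acl[i].deny;
            }
        }
    }
    return false;                                     /* no matching ACE: access denied */
}

int main(void)
{
    ACE_T acl[] = {
        { 1002, true,  2 },      /* deny write (2) to SID 1002 */
        { 1002, false, 7 },      /* allow read/write/execute to SID 1002 */
    };
    SID_T caller[] = { 1002 };
    /* requesting write: the deny entry matches first, so access is refused */
    printf("%s\n", check_access(acl, 2, caller, 1, 2) ? "granted" : "denied");
    return 0;
}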

4 Logon Security

The logon procedure in Windows NT is fairly complicated and involves both the executive and a number
of protected servers, including WinLogon, LSA, and SAM, to authenticate and grant privileges to a user.
The WinLogon server is the coordinator. A detailed description of the logon procedure is given in [18]. At
logon time, a user can choose to logon either locally or remotely. In the former case, the user is
authenticated against the local SAM database, and in the latter case against a SAM database stored on a
Domain Controller (DC). In remote logons, the netlogon service, which is started when the computer is
booted, is used on both sides. Passwords sent over the network are encrypted.

5 Account Security

As mentioned above, only legitimate users are allowed to access objects on a Windows NT
system, and each user belongs to one or more groups. User accounts, except Administrator and Guest,
are defined by users that have the permissions required to create new accounts, e.g., users in the
Administrator group.

5.1 The SAM database

All account information is centrally stored in the SAM database, which is accessible only by the SAM
subsystem. Passwords are stored as hashed values and, by default, two hashed variants of each
password are stored, in LAN Manager format and in NT native format.

5.2 User Rights

In addition to account information, e.g., username and password, a set of attributes called “rights” can
be specified. They define what a user (or group) can do in a system. The set of rights is given in Table 1.
Most of the rights are self-explanatory; for example, a user account with the Log on locally right is
allowed to log on to the system from the local machine. Windows NT has a standard policy
for assigning rights to certain groups.

For example, on a Windows NT workstation everyone has the right to log on locally, while on a Windows
NT server only Administrators and Operators are able to do so.

5.3 Account Policies

The account policies regulate password restrictions and account lockouts. These policies are first defined
when the account is created, but may later be changed. The following password options can be set:

– Maximum password age, which sets a limit on how long a password can be used before it must be changed.

– Minimum password age, which sets a limit on how long a password must be used before a user is allowed to change it.

– Minimum password length, which sets the minimum number of characters a password must consist of.

– Password uniqueness, where a Windows NT system can be configured to remember old passwords and force users to choose new passwords that have not recently been used.

An account can also be specified to be locked out after a given number of
unsuccessful logon attempts. The lockout feature offers a few options:

– Lockout after n bad logon attempts, where n must be assigned a positive integer value.

– Reset count after m minutes, where m specifies how many minutes shall pass before the bad logon
counter is cleared.

– Lockout duration, where forever (i.e., until an administrator unlocks) or duration in minutes are the
possible choices.

Note, however, that the default administrator account cannot be locked in any way.
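
To make the interaction of these three options concrete, here is a minimal C sketch of a lockout counter. The structure and function names are hypothetical; they only mirror the policy knobs listed above (n bad attempts, an m-minute reset window, and a lockout duration that may be "forever").

/* Illustrative lockout-counter sketch; not the Windows NT implementation. */
#include <stdbool.h>
#include <time.h>

typedef struct {
    int    bad_attempts;        /* consecutive failed logons               */
    time_t first_bad;           /* when the current bad-logon window began */
    time_t locked_until;        /* 0 = not locked, (time_t)-1 = forever    */
} account_state;

/* Record one failed logon and apply the policy: lock after n bad attempts,
 * reset the counter after m minutes, lock for lock_minutes
 * (or forever when lock_minutes < 0). */
void failed_logon(account_state *a, int n, int m, int lock_minutes)
{
    time_t now = time(NULL);

    if (a->bad_attempts > 0 && now - a->first_bad > (time_t)m * 60)
        a->bad_attempts = 0;    /* the reset window has elapsed */
    if (a->bad_attempts == 0)
        a->first_bad = now;

    if (++a->bad_attempts >= n)
        a->locked_until = (lock_minutes < 0) ? (time_t)-1
                                             : now + (time_t)lock_minutes * 60;
}

bool is_locked(const account_state *a)
{
    if (a->locked_until == (time_t)-1)
        return true;            /* locked until an administrator unlocks */
    return a->locked_until != 0 && time(NULL) < a->locked_until;
}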

6 TCP Security

A secure system must permit only services that are proved secure and necessary [4]. In Windows NT, it
is therefore possible to block communication to both TCP and UDP ports. This implies that a system can
be configured to accept only packets sent to specific ports on which secure servers listen. Microsoft calls
this feature TCP security.
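
Conceptually, this amounts to an allow-list check on the destination port of incoming packets. The sketch below only illustrates the idea in C; the port numbers and the function are assumptions, not Microsoft's actual filtering code.

/* Conceptual port allow-list: accept a packet only if its destination port
 * is one on which a trusted server listens. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static const uint16_t allowed_tcp_ports[] = { 80, 443 };   /* example policy */

bool port_allowed(uint16_t dst_port)
{
    for (size_t i = 0; i < sizeof allowed_tcp_ports / sizeof allowed_tcp_ports[0]; i++)
        if (allowed_tcp_ports[i] == dst_port)
            return true;
    return false;               /* packets to any other port are dropped */
}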

7 Auditing

The Security Reference Monitor (SRM) and the Local Security Authority (LSA) are responsible for
auditing in Windows NT together with the Event Logger. Different types of events are grouped into
event categories, and auditing is then done on the basis of these groups. There are seven types of event
groups [18]: System, Logon/Logoff, Object Access, Privilege Use, Detailed Tracking, Policy Change and
Account Management.

The auditing is based on audit records constructed on request from the responsible subsystem by SRM
(in some cases by LSA). Requests from the executive are always carried out, while servers need the Audit
Privilege for SRM to honour their requests. The request must be sent for each occurrence of an event.
The audit record is then sent to LSA, which in turn sends it to the Event Logger, after field expansions
and/or field compressions. Finally, the Event Logger commits the audit record to persistent storage.
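
The flow can be summarised with a small sketch: an event in one of the seven categories is turned into an audit record and handed on toward persistent storage. All names below are invented for illustration; they do not correspond to the real SRM/LSA interfaces.

/* Illustrative sketch of the audit-record flow described above. */
#include <stdio.h>
#include <time.h>

typedef enum { EV_SYSTEM, EV_LOGON_LOGOFF, EV_OBJECT_ACCESS, EV_PRIVILEGE_USE,
               EV_DETAILED_TRACKING, EV_POLICY_CHANGE, EV_ACCOUNT_MGMT } event_category;

typedef struct {
    event_category category;
    time_t         when;
    int            subject_sid;      /* who caused the event */
    const char    *description;
} audit_record;

/* One record is built per occurrence of an event; in the real system it
 * travels SRM -> LSA (field expansion/compression) -> Event Logger. */
void audit_event(event_category cat, int sid, const char *what)
{
    audit_record rec = { cat, time(NULL), sid, what };
    printf("[audit] category=%d sid=%d desc=%s\n",
           rec.category, rec.subject_sid, rec.description);
}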

5.10 Windows Vulnerabilities


In total, 3078 vulnerabilities have been found. Some of them are:

CVE-2023-44216

PVRIC (PowerVR Image Compression) on Imagination 2018 and later GPU devices offers software-transparent
compression that enables cross-origin pixel-stealing attacks against feTurbulence and feBlend in the SVG Filter
specification, aka a GPU.zip issue. For example, attackers can sometimes accurately determine text contained on a
web page from one origin if they control a resource from a different origin.

CVE-2023-36913

Microsoft Message Queuing Information Disclosure Vulnerability

CVE-2023-36912

Microsoft Message Queuing Denial of Service Vulnerability

CVE-2023-36911

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability

CVE-2023-36910

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability

CVE-2023-36909

Microsoft Message Queuing Denial of Service Vulnerability

CVE-2023-36908

Windows Hyper-V Information Disclosure Vulnerability

CVE-2023-36907

Windows Cryptographic Services Information Disclosure Vulnerability

CVE-2023-36906

Windows Cryptographic Services Information Disclosure Vulnerability

CVE-2023-36905

Windows Wireless Wide Area Network Service (WwanSvc) Information Disclosure Vulnerability
CVE-2023-36903

Windows System Assessment Tool Elevation of Privilege Vulnerability

CVE-2023-36900

Windows Common Log File System Driver Elevation of Privilege Vulnerability

CVE-2023-36697

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability

CVE-2023-36606

Microsoft Message Queuing Denial of Service Vulnerability

CVE-2023-36593

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability

CVE-2023-36592

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability

CVE-2023-36591

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability

CVE-2023-36590

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability

CVE-2023-36589

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability

CVE-2023-36583

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability


CVE-2023-36582

Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability.

5.11 Address Space Layout Randomization


Address space layout randomization (ASLR) is a memory-protection technique for operating systems
(OSes) that guards against buffer overflow attacks by randomizing the locations where system
executables are loaded into memory.
The success of many cyberattacks, especially zero-day exploits, depends on the attacker's ability to
know or guess the position of processes and functions in memory. ASLR places address space targets in
unpredictable locations. If an attacker attempts to exploit an incorrect address space location, the
target application will crash, stopping the attack and alerting the system.
ASLR was created by the PaX project as a Linux patch in 2001 and was incorporated into the Windows
operating system starting with Vista in 2007. Before ASLR, the memory locations of files and
applications were either known or easily determined.
Adding ASLR to Vista increased the number of possible address space locations to 256, which means
attackers had only a 1 in 256 chance of finding the correct location to execute code. Apple began
including ASLR in Mac OS X 10.5 Leopard, and Apple iOS and Google Android both began using
ASLR in 2011.

What is Virtual Memory?


Virtual memory is a memory management technique with many advantages, but it was primarily
created to make programming easier. Imagine you have Google Chrome, Microsoft Word, and several
other programs open on a PC with 4 GB of RAM. In total, the programs on this PC use significantly
more than 4 GB of RAM. However, not all of the programs will be active at all times or need
simultaneous access to that RAM.
The operating system allocates chunks of memory to programs called pages. If there is not enough
RAM to store all of the pages at once, the pages least likely to be needed are stored on the slower (but
more spacious) hard drive. When the stored pages are needed, they swap places with less important
pages currently in RAM. This process is called paging and lends its name to the pagefile.sys file
on Windows.
Virtual memory makes it easier for programs to manage their memory, and it also makes them more
secure. Programs do not need to worry about where other programs are storing data, or how much
RAM is left. They can simply ask the operating system for additional memory (or return unused
memory) as needed. All a program sees is a single contiguous chunk of memory addresses for its
exclusive use, called virtual addresses. The program is not allowed to look at another program's memory.
When a program needs to access memory, it gives the operating system a virtual address. The
operating system works with the CPU's memory management unit (MMU), which translates between
virtual and physical addresses. The program never interacts with RAM directly.
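
The translation step can be sketched with a toy, single-level page table. The page size, table size, and structure below are assumptions chosen for clarity; real MMUs use multi-level tables and TLBs.

/* Minimal sketch of virtual-to-physical translation with a single-level
 * page table and 4 KiB pages (purely illustrative). */
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE   4096u
#define PAGE_SHIFT  12
#define NUM_PAGES   1024u             /* size of this toy address space */

typedef struct {
    bool     present;                 /* is the page resident in RAM?   */
    uint32_t frame;                   /* physical frame number          */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Translate a virtual address; returns false on a page fault, in which
 * case the OS would bring the page in from the paging file and retry. */
bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    if (vpn >= NUM_PAGES || !page_table[vpn].present)
        return false;                            /* page fault          */

    *paddr = (page_table[vpn].frame << PAGE_SHIFT) | offset;
    return true;
}
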
Effect of ASLR on Virtual Memory
The decision to enable ASLR represents a tradeoff between enhanced security and a reduction in the
amount of available 24-bit, 31-bit, and 64-bit private storage. When enabled for 24- and 31-bit
virtual storage, the size of available private storage is reduced by up to 63 pages and 255
pages, respectively. A job's requested region size must still be satisfied from within the
reduced private area for the job to be started. Jobs whose region size cannot be satisfied will
result in an ABEND 822. Even if a job's requested region size is satisfied, it is still possible that
the reduced size of private storage prevents the job from completing, resulting in an ABEND 878.
One way to determine whether jobs would be unable to run under the constrained size of 24- or
31-bit private storage that results when ASLR is enabled is to specify a larger value for the
CSA parameter in parmlib. Increasing the sizes of both 24- and 31-bit CSA by 1M effectively reduces
the sizes of 24- and 31-bit private storage by 1M, which is greater than the maximum
reduction that would occur under ASLR.

The Main Disadvantages of ASLR


 Randomization dependent on load times

With Address Space Layout Randomization, the base addresses of DLLs depend on boot-time
randomization. In practice, this means that the base addresses of libraries are only re-randomized at the
next reboot. This is an Achilles heel that attackers can exploit, for example by
combining it with vulnerabilities such as memory disclosure or with brute-force attacks.

 Unsupported executables/libraries, low entropy

ASLR is not enforced when the executable or DLLs are not built with ASLR support. Although
Windows 8 and Windows 10 try to overcome this limitation (e.g., Force ASLR in Windows 8), there are
still exceptions that often render the ASLR protection ineffective. Older
versions of Windows and legacy programs are particularly prone to this limitation. In addition, ASLR on
32-bit systems suffers from low entropy, making it vulnerable to brute-force and similar
attacks; the short calculation below illustrates why.
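
Rough arithmetic only: with k bits of ASLR entropy, a fixed address guess succeeds with probability 1/2^k per attempt, so a brute-force attacker needs about 2^(k-1) attempts on average. The entropy values used in the sketch below are assumed, illustrative figures.

/* Back-of-the-envelope ASLR entropy calculation (illustrative values). */
#include <stdio.h>

int main(void)
{
    int entropy_bits[] = { 8, 16, 28 };   /* ~8 bits matches the 1-in-256
                                             Vista figure quoted above   */
    for (int i = 0; i < 3; i++) {
        double avg_attempts = (double)(1ULL << entropy_bits[i]) / 2.0;
        printf("%2d bits of entropy -> about %.0f guesses on average\n",
               entropy_bits[i], avg_attempts);
    }
    return 0;
}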

 Does not catch an attack

Address Space Layout Randomization aims to prevent an attack from reliably reaching its
target memory address. ASLR does not focus on catching the attack, but on making the attack
unlikely to work. Once the shellcode jumps to the wrong address during the exploit
(because of the memory randomization), the program's behavior is undefined. The process may raise an
exception, crash, hang, or simply continue with inconsistent behavior.

 ASLR does not warn in case of an attack

ASLR does not provide any alerts about attack attempts. When a vulnerability is exploited and the
exploit fails (because of ASLR's memory randomization), no alert or attack indication is produced.
Essentially, ASLR does not 'know' when an attack has occurred.

 ASLR does not provide information about the attack

Forensic information about an attack, exploit, and shellcode is crucial for any serious forensic
investigation. Exploited processes, memory dumps, and call stacks can be used to identify,
fingerprint, and tag exploits. ASLR cannot provide this information, since it does not know whether an
attack took place or, for that matter, whether it was stopped.

 ASLR is regularly bypassed by exploits

Since Address Space Layout Randomization was introduced in Windows, it has been bypassed on
many occasions by real exploits and attacks. Attackers continually develop new techniques to
defeat ASLR protection. Bypass techniques include using ROP chains in non-ASLR modules
(e.g., CVE-2013-1347), JIT/NOP spraying (e.g., CVE-2013-3346), as well as memory disclosure
vulnerabilities and other methods (e.g., CVE-2015-1685, CVE-2015-2449, CVE-2013-2556,
CVE-2013-0640, CVE-2013-0634).
In 2016, researchers from SUNY Binghamton and the University of California, Riverside, presented a paper
called Jump Over ASLR: Attacking Branch Predictors to Bypass ASLR. The paper details an
approach to attack the Branch Target Buffer (BTB). The BTB is the part of the processor that speeds up
conditional (if) statements by predicting the outcome of branches. Using the authors' method, it is possible to determine the locations
of known branch instructions in a running program. The attack in question was performed on a Linux
machine with an Intel Haswell processor (first released in 2013), but could most likely be applied to
any modern operating system and processor.
That said, there is no need to give up on ASLR. The paper offered a few ways that hardware
and operating system designers can mitigate this threat. Newer, fine-grained ASLR techniques
would require more effort from the attacker, and increasing the amount of entropy (randomness)
can make the Jump Over attack infeasible. Most likely, newer operating systems and
processors will be immune to this attack.

 Excluding individual address spaces from ASLR

When ASLR is enabled, you can use SAF authorization to exempt selected address spaces
from ASLR. To do this, grant SAF READ authority to the IARRSM.EXEMPT.ASLR.jobname resource in the
FACILITY class to fully exempt the job, or to the IARRSM.EXEMPT.ASLR24.jobname resource to
exempt the job from 24-bit ASLR only. The following example shows the sequence of commands to
fully exempt a job from ASLR:
RDEFINE FACILITY IARRSM.EXEMPT.ASLR.jobname UACC(READ)
SETROPTS CLASSACT(FACILITY)
SETROPTS RACLIST(FACILITY) REFRESH
Certain system address spaces, such as MASTER, which initialize early during the IPL
process, are not randomized. High virtual storage may not be randomized in address spaces with
initialization exits that obtain high virtual storage. Job steps that obtain high virtual storage
and assign it to a task that is not within the program task tree of that job step limit the ability of the
system to set up randomization for the next job step if the obtained storage persists across job
steps.
Attempting to bypass ASLR
Despite its benefits, attempts to bypass ASLR are common and appear to fall into a few categories:

 Using address leaks.

 Accessing data relative to specific addresses.
 Exploiting implementation weaknesses that allow attackers to guess addresses when
entropy is low or when the ASLR implementation is broken.
 Using side channels of hardware operation.
Address Space Layout Randomization (ASLR) can help defeat certain types of buffer overflow attacks.
ASLR can locate the base, libraries, heap, and stack at random positions in a process's address space,
which makes it difficult for an attacking program to predict the memory address of the next instruction.
ASLR is built into the Linux kernel and is controlled by the
parameter /proc/sys/kernel/randomize_va_space. The randomize_va_space parameter can take the
following values:

0 – Disable ASLR. This setting is applied if the kernel is booted with the norandmaps boot parameter.

1 – Randomize the positions of the stack, virtual dynamic shared object (VDSO) page, and shared
memory regions. The base address of the data segment is located immediately after the end of
the executable code segment.

2 – Randomize the positions of the stack, VDSO page, shared memory regions, and the data
segment. This is the default setting.

You can change the setting temporarily by writing a new value to /proc/sys/kernel/randomize_va_space,
for example:

# echo value > /proc/sys/kernel/randomize_va_space

To change the value permanently, add the setting to /etc/sysctl.conf, for example:

kernel.randomize_va_space = value

and run the sysctl -p command.

If you change the value of randomize_va_space, you should test your application stack to ensure that it is
compatible with the new setting.

If necessary, you can disable ASLR for a specific program and its child processes by using the following
command:

% setarch `uname -m` -R program [args ...]
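
A quick way to see ASLR in action is to print a few addresses and run the program twice, assuming a Linux system and a PIE-built binary. This is a minimal sketch, not part of any official tooling.

/* aslr_demo.c: print a few addresses and run the program twice.
 * With randomize_va_space=2 the values change between runs; under
 * setarch `uname -m` -R ./aslr_demo (or with ASLR disabled) they repeat.
 * The code address is only randomized if the binary is built as PIE. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int   on_stack = 0;
    void *on_heap  = malloc(16);

    printf("stack: %p\n", (void *)&on_stack);
    printf("heap : %p\n", on_heap);
    printf("code : %p\n", (void *)&main);

    free(on_heap);
    return 0;
}

Compiled with cc -o aslr_demo aslr_demo.c, two consecutive runs should print different addresses while ASLR is active.
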

5.12 Retrofitting Security into a Commercial Operating System


To retrofit a commercial operating system into a secure operating system, the operating
system must be modified to implement the reference monitor concept, see Definitions 2.5 and 2.6.
The reference monitor concept requires guarantees of complete mediation, tamperproofing, and
verifiability. There are challenges in each of these areas.
Complete mediation requires that all the security-sensitive operations in the operating system be
identified, so they can be authorized. Identifying security-sensitive operations in a complex, production
system is a nontrivial process. Such systems have a large number of security-sensitive operations
covering a variety of object types, and many are not clearly identified. As we will see, a significant
number of security-sensitive operations are embedded deep inside the kernel code. For example, in
order to authorize an open system call, several authorizations may be necessary for directories, links,
and finally the target file (i.e., inode) itself. In addition to files, there are many such objects in modern
operating systems, including various types of sockets, shared memory, semaphores, interprocess
communication, etc. The identification of covert channels (see Chapter 5) is even more complex, so it is
typically not part of the retrofitting process for commercial operating systems. As a result, complete
mediation of all channels is not ensured in the retrofitted operating systems we detail.
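
To illustrate what "several authorizations for one open call" looks like, the sketch below places hypothetical authorization hooks on the name-resolution and open paths. The function and type names are invented for this example and are only loosely modeled on kernel authorization-hook interfaces such as LSM.

/* Conceptual sketch of complete mediation on the open() path: every directory
 * traversed and the final inode must be authorized before the open succeeds. */
#include <stdbool.h>

struct cred;    /* subject credentials (opaque) */
struct inode;   /* kernel object (opaque)       */

enum op { OP_LOOKUP, OP_READ, OP_WRITE };

/* The reference monitor's single authorization entry point; the policy
 * decision itself is elided in this sketch. */
static bool authorize(const struct cred *subject, const struct inode *object,
                      enum op operation)
{
    (void)subject; (void)object; (void)operation;
    return true;   /* placeholder: consult the policy store here */
}

/* Hook called for every directory traversed (and link followed) during
 * name resolution. */
bool mediate_lookup(const struct cred *subject, const struct inode *dir)
{
    return authorize(subject, dir, OP_LOOKUP);
}

/* Hook called on the final inode with the access modes requested by open(). */
bool mediate_open(const struct cred *subject, const struct inode *file,
                  bool want_read, bool want_write)
{
    if (want_read && !authorize(subject, file, OP_READ))
        return false;
    if (want_write && !authorize(subject, file, OP_WRITE))
        return false;
    return true;
}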

Tamperproofing the reference monitor would seem to be the easiest task in retrofitting an existing
system, but this also has proven to be difficult. The obvious approach is to include the reference monitor
itself in the kernel, so that it can enjoy the same tamper-protection that the kernel has (e.g., runs in ring
0).
There are two issues that make guaranteeing tamper-protection difficult. First, commercial operating
systems often provide a variety of ways to update the kernel. Consider that UNIX kernels have a device
file, /dev/kmem, that can be used to access physical memory directly. Thus, processes running outside of
the kernel may be able to tamper with kernel memory, even though they run in a less-privileged
ring. Modern kernels include a variety of other interfaces to read and write kernel memory, such as
/proc, Sysfs file systems, and netlink sockets. Of course, such interfaces are only accessible to root
processes, but there are many processes in a UNIX system that run as root. Should any one of them be
compromised, the kernel may be tampered with. In effect, every root process must be part of a UNIX
system’s trusted computing base to ensure tamper-protection.

But the biggest challenge for retrofitting an operating system is providing verification that the resultant
reference monitor implementation enforces the required security goals. We must verify that mediation
is implemented correctly, that the policy enforces the expected security goal, that the reference monitor
implementation is correct, and that the rest of the trusted computing base will behave correctly.
Verifying that the mediation is done correctly aims to address the problems discussed above. Typically,
the mediation interface is designed manually. While tools have been developed that find bugs in
mediation interfaces [149, 351], proving the correctness of a reference monitor interface in an operating
system is intractable in general because such systems are written in non-type-safe languages, such as C and
various assembly languages.
Policy verification can also be complex, as there are a large number of distinct authorization queries in a
commercial operating system, and a large number of distinct processes. Some retrofitted
commercial operating systems use a multilevel security (MLS) model, such as Bell-LaPadula [23], but
many use access matrix mandatory access control (MAC) models, such as Type Enforcement [33]. The
latter models are more flexible, but they also result in more complex policies. A Bell-LaPadula policy is of
fixed size, but an access matrix policy tends to grow with the number of distinct system programs. Such
models present a difficult challenge in verifying that each system is enforcing the desired security goals.
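
A minimal sketch of an access matrix (Type Enforcement style) check is shown below; the subject types, object types, and matrix contents are invented for illustration. It also hints at why such policies grow: every new program type adds a row to the matrix.

/* Type Enforcement sketch: an access matrix indexed by (subject type,
 * object type) yields the operations that are allowed. */
#include <stdbool.h>

enum subj_type { SUBJ_USER, SUBJ_HTTPD, SUBJ_KERNEL, N_SUBJ };
enum obj_type  { OBJ_USER_FILE, OBJ_WEB_CONTENT, OBJ_SYSTEM_FILE, N_OBJ };

#define PERM_READ  0x1
#define PERM_WRITE 0x2

/* allow[s][o] is a bitmask of operations that subject type s may perform
 * on objects of type o; unlisted cells default to no access. */
static const unsigned allow[N_SUBJ][N_OBJ] = {
    [SUBJ_USER]   = { [OBJ_USER_FILE]   = PERM_READ | PERM_WRITE,
                      [OBJ_WEB_CONTENT] = PERM_READ },
    [SUBJ_HTTPD]  = { [OBJ_WEB_CONTENT] = PERM_READ },
    [SUBJ_KERNEL] = { [OBJ_USER_FILE]   = PERM_READ | PERM_WRITE,
                      [OBJ_WEB_CONTENT] = PERM_READ | PERM_WRITE,
                      [OBJ_SYSTEM_FILE] = PERM_READ | PERM_WRITE },
};

bool te_allowed(enum subj_type s, enum obj_type o, unsigned requested)
{
    return (allow[s][o] & requested) == requested;
}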

Finally, the implementation of a commercial operating system and the remaining trusted computing
base is too complex to verify whether the overall system protects the reference monitor. Commercial
operating systems are large, there are often several developers of the trusted computing base software,
and the approaches used to build the software are not documented. The best that we can hope for is
that some model of the software can be constructed after the fact. The verification of Scomp’s
correctness required an evaluation that the design model enforced system security goals and that the
source correctly implemented the design. Many believe that it is not possible to build a sufficiently
precise design of a commercial system, and a mapping between this design and the system’s source code,
necessary to enable such verification. Clearly, current technologies would not support such a
verification.

5.13 Introduction to Security Kernels


While the Multics project was winding down in the mid-1970s, a number of vendors and researchers
gained confidence that a secure operating system could be constructed and that there was a market for
such an operating system, within the US government anyway. Many of the leaders of these operating
system projects were former members of the Multics team, but they now led other research groups or
development groups. Even Honeywell, the owner of the Multics system, was looking for other ways to
leverage the knowledge that it gained through the Multics experience.
While the Multics security mechanisms far exceeded those of the commercial operating systems of the
day, it had become a complex system and some of the decisions that went into its design needed to be
revisited. Multics was designed to be a general-purpose operating system that enforced security goals,
but it was becoming increasingly clear that balancing generality, security, and performance was a very
difficult challenge, particularly given the performance of hardware in the mid-70s. As a result, two
directions emerged, one that focused on generality and performance with limited security mechanisms
(e.g., UNIX) and another that emphasized verifiable security with reasonable performance for limited
application suites (i.e., the security kernel). In the former case, popular, but insecure, systems (see
Chapter 4) were built and a variety of efforts have been subsequently made to retrofit a secure
infrastructure for such systems (see Chapters 7 through 9). In this chapter we examine the latter
approach.
In the late 1970s and early 1980s, there were several projects that aimed to build a secure operating
system from scratch, addressing security limitations of the Multics system. These included the Secure
Communications Processor (Scomp) [99] from Honeywell, the Gemini Secure Operating System (GSOS or
GEMSOS) [290] from Gemini, the Secure Ada Target (SAT) [34, 125, 124] and subsequent LOCK systems
[293, 273, 274, 292, 276] from Honeywell and Secure Computing, respectively, which are based on the
Provably Secure Operating System (PSOS) design [92, 226], the Kernelized Secure Operating System
(KSOS) [198] from Ford Aerospace and Communications, the Boeing Secure LAN [298], and several
custom guard systems (mostly proprietary, unpublished systems). In this chapter, we examine two of
these systems, Scomp and GEMSOS, to demonstrate the design and implementation decisions behind
the development of security kernels. These two systems represent two different implementation
platforms for building a security kernel: Scomp uses custom hardware designed for security
enforcement, whereas GEMSOS was limited to existing, commercially-popular hardware (i.e., the Intel
x86 platform). These systems show what can be done when even the hardware is optimized for security
(Scomp) and the limitations imposed on the design when available hardware is used (GEMSOS). Recent
advances in commercial hardware, such as I/O MMUs, may enable us to revisit some Scomp design
decisions.
THE SECURITY KERNEL
The major technical insight that emerged at this time was that a secure operating system needed a
small, verifiably correct foundation upon which the security of the system can be derived. This
foundation was called a security kernel [108]. A security kernel is defined as the hardware and software
necessary to realize the reference monitor abstraction [10]. A security kernel design includes hardware
mechanisms leveraged by a minimal, software trusted computing base (TCB) to achieve the reference
monitor concept guarantees of tamperproofing, complete mediation, and verifiability (see Definition
2.6).
The first security kernel was prototyped by MITRE in 1974. It directly managed the system’s physical
resources with less than 20 subroutines in less than 1000 source lines of code. In addition to identifying
what is necessary to build a security kernel that implements a reference monitor, this experience and
the Multics experience indicated how a security kernel should be built. While mediation and
tamperproofing are fundamental to the design of a security kernel, in building a security kernel the
focus became verification. Three core principles emerged [10]. First, a security kernel has to implement
a specific security policy, as it can only be verified as being secure with respect to some specific security
goals. A security goal policy (e.g., based on information flow, see Chapter 5) must be defined in a
mandatory protection system (see Definition 2.4) to enable verification. Second, the design of the
security kernel must define a verifiable protection behavior of the system as a whole. That is, the system
mechanisms must be comprehensively assessed to verify that they implement the desired security
goals. This must be in the context of the security kernel’s specified security policy. Third, the
implementation of the kernel must be shown to be faithful to the security model’s design. While a
mathematical formalism may describe the design of the security kernel and enable its formal
verification, the implementation of the security kernel in source code must not invalidate the principles
established in the design.
Thus, the design and implementation of security kernels focused on the design of hardware, a minimal
kernel, and supporting trusted services that could be verified to implement a specific security policy,
multilevel security. While Multics had been designed to implement security on a particular hardware
platform, the design of security kernels included the design of hardware that would enable efficient
mediation of all accesses. The design of security kernel operating systems leverages this hardware to
provide the small number of mechanisms necessary to enforce multilevel security. Finally, some trusted
services are identified, such as file systems and process management, that are necessary to build a
functional system.
The primary goal of most security kernel efforts became verification that the source level
implementation satisfies the reference monitor concept. This motivated the exploration of formal and
semi-formal methods for verifying that a design implemented the intended security goals and for
verifying that a resultant source code implementation satisfied the verified design. As Turing showed
that no general algorithm can show that any program satisfies a particular property (e.g., halts or is
secure), such security verification must be customized to the individual systems and designs. The work
in security kernel verification motivated the subsequent methodologies for system security assurance
(see Chapter 12). The optimistic hope that formal tools would be developed that could automatically
support formal assurance has not been fulfilled, but nonetheless assurance is still the most practical
means known to ensure that a system implements a security goal.
Verification that an implementation correctly enforces a system’s security goals goes far beyond
verifying that the authorization mechanisms are implemented correctly. The system implementation must be
verified to ensure that all system resource mechanisms (see Chapter 1) are not vulnerable to attack. As
computing hardware is complex, assurance of correct use of hardware for implementing system
resources is nontrivial. Consider the memory system. A hardware component called the Translation
Lookaside Buffer (TLB) holds a cache of mappings from virtual memory pages to their physical memory
counterparts. If an attacker can modify these mappings they may be able to compromise the secrecy
and integrity of system and user application data. Depending on the system architecture, TLBs may be
filled by hardware or software. For hardware-filled TLBs, the system implementation must ensure that
the page table entries used to fill the TLB cannot be modified by attackers. For software-filled TLBs, the
refill code and data used by the code must be isolated from any attacker behavior. Further, other
attacks may be possible if an attacker can gain access to secret memory after it is released. For example,
heap allocation mechanisms must be verified to ensure clearing of all secret memory (e.g., to prevent
object reuse). Even across reboots, secret data may be leaked as BIOS systems are inconsistent about
whether they clear memory on boot or not, and data remains in memory for some time after shutdown.
As a result of these and other possible attack vectors (e.g., covert channels, see Chapter 5), careful
verification of system implementations is necessary to ensure reference monitor guarantees, but it is a
complex task.
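
As a small illustration of the object reuse requirement mentioned above, the sketch below clears a buffer before handing it back to the allocator. It is a user-level analogy, not kernel code; a real security kernel must also clear page frames and deal with residual data across reboots.

/* Sketch: clear secret memory before returning it to the allocator, so the
 * next owner of the buffer cannot read leftover data (object reuse). */
#include <stdlib.h>
#include <stddef.h>

void secure_free(void *p, size_t len)
{
    if (p == NULL)
        return;
    /* the volatile pointer defeats dead-store elimination of the clearing loop */
    volatile unsigned char *v = p;
    for (size_t i = 0; i < len; i++)
        v[i] = 0;
    free(p);
}
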
In this chapter, we examine two systems whose designs aimed for the most comprehensively assured
security, Honeywell’s Scomp [99] and Gemini’s GEMSOS [290]. Both of these systems achieved the highest
assurance rating ever achieved for an operating system, A1, as defined by the Orange Book [304]
assurance methodology. Scomp was used as the basis for the design of the assurance criteria. GEMSOS
is still available today [5].
