IBM Power Security Catalog
Tim Simon
Felipe Bessa
Hugo Blanco
Carlo Castillo
Rohit Chauhan
Kevin Gee
Gayathri Gopalakrishnan
Samvedna Jha
Andrey Klyachkin
Andrea Longo
Ahmed Mashhour
Amela Peku
Prashant Sharma
Vivek Shukla
Dhanu Vasandani
Henry Vo
IBM Power
Redbooks
IBM Redbooks
February 2024
SG24-8568-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
6.2.4 Misconfiguration and human errors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.3 Linux on Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
6.3.1 Linux distributions on Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
6.4 Hardening Linux systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.4.1 Compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.4.2 Network security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.4.3 User policies and access controls. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.4.4 Logging, audits, and file integrity monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.4.5 File system security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.4.6 SIEM and endpoint detection and response integration . . . . . . . . . . . . . . . . . . . 187
6.4.7 Malware protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
6.4.8 Backup strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6.4.9 Consistent update strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6.4.10 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.5 Best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
6.6 Developing an incident response plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Chapter 11. Lessons learned and future directions in IBM Power security . . . . . . . 249
11.1 Lessons that were learned from real-world breaches . . . . . . . . . . . . . . . . . . . . . . . . 250
11.1.1 Recommendations to reduce data breach costs. . . . . . . . . . . . . . . . . . . . . . . . 250
11.1.2 Summary of IBM X-Force Threat Intelligence Index 2024 . . . . . . . . . . . . . . . . 250
11.1.3 Best practices for data breach prevention. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
11.1.4 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
11.2 Basic AIX security strategies and best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
11.2.1 Usernames and passwords. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
11.2.2 Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
11.2.3 Insecure daemons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
11.2.4 Time synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
11.2.5 Patching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
11.2.6 Server firmware and I/O firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
11.2.7 Active Directory and LDAP integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
11.2.8 Enhanced access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
11.2.9 Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
11.2.10 A multi-silo approach to security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
11.2.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
11.3 Fix Level Recommendation Tool for IBM Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
11.4 Physical security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
11.4.1 Key physical security measures: a layered approach . . . . . . . . . . . . . . . . . . . . 257
11.4.2 Perimeter security and beyond . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Anypoint Flex Gateway (Salesforce/Mulesoft) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Active IBM i security ecosystem companies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Db2®, DS8000®, FlashCopy®, GDPS®, Guardium®, HyperSwap®, IBM®, IBM Automation®, IBM Cloud®, IBM FlashSystem®, IBM Instana™, IBM Security®, IBM Z®, Instana®, POWER®, Power Architecture®, Power8®, Power9®, PowerHA®, PowerPC®, PowerVM®, QRadar®, Redbooks®, Redbooks (logo)®, Satellite™, SystemMirror®, Tivoli®, WebSphere®, X-Force®, z/OS®, z/VM®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Red Hat, Ansible, Ceph, Fedora, JBoss, and OpenShift are trademarks or registered trademarks of Red Hat, Inc.
or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
A multi-layered security architecture is essential for protection. Key areas to focus on include
the following items:
Hardware-level security: Prevent physical tampering and help ensure data integrity.
Virtualization security: Isolate environments and control resource access.
Management tool security: Secure hardware and cloud resources.
Operating system security: Continuously update for robust security.
Storage security: Protect data at rest and in transit.
Networking security: Prevent unauthorized access and data breaches.
This IBM Redbooks® publication describes how the IBM Power ecosystem provides
advanced security capabilities at each of these layers. IBM Power servers are designed with
security as a core consideration.
At the hardware level, advanced technology includes tamper-resistant features that are built
into the processor to prevent unauthorized access and modification, secure cryptographic
engines to provide strong encryption of data, and Trusted Boot to help ensure that only
authorized software components are loaded during system startup.
At the virtualization level, the hypervisor, which manages virtual machines (VMs), is designed
to be secure and resistant to attacks. The hypervisor isolates workloads within a single
physical server, which enables secure resource sharing within your infrastructure. The
Hardware Management Console (HMC) provides centralized management and control of
Power servers in a secure manner.
The operating systems that run on IBM Power servers (IBM AIX®, IBM i, and Linux on Power)
offer robust security features, which include user authentication, access controls, and
encryption support. Also, tools such as IBM PowerSC provide a comprehensive security and
compliance solution that helps manage security policies, monitor threats, and enforce
compliance.
Security also requires solid management and control. This book describes best practices
such as conducting regular security audits, keeping operating systems and applications up to
date with the latest security fixes, and implementing strong user authentication and
authorization policies. Other critical elements include the implementation of data encryption
for both data at rest and in transit, and strong network security processes that use firewalls,
intrusion detection systems (IDS), and other security measures.
By combining these hardware, software, and management practices, IBM Power provides a
robust foundation for security in your IT environment.
Tim Simon is an IBM Redbooks Project Leader in Tulsa, Oklahoma, US. He has over
40 years of experience with IBM®, primarily in a technical sales role working with customers to
help them create IBM solutions to solve their business problems. He holds a BS degree in Math
from Towson University in Maryland. He has extensive experience creating customer solutions
by using IBM Power, IBM Storage, and IBM Z® throughout his career.
Felipe Bessa is an IBM Brand Technical Specialist and Partner Technical Advocate for
IBM Power. He works for IBM Technology in Brazil and has over 25 years of experience in the
areas of research, planning, implementation, and administration of IT infrastructure solutions.
Before joining IBM, he was recognized as a Reference Client for IBM Power Technologies for
SAP and SAP HANA, IBM PowerVC, IBM PowerSC, Monitoring and Security, IBM Storage, and
the Run SAP Like a Factory (SAP Solution Manager) Methodology. He was chosen as an
IBM Champion for IBM Power for 2018 - 2021.
Hugo Blanco is an IBM Champion who is based in Madrid. He has been working with Power
servers since 2008. He began his career as an instructor and has since taken on various roles
at SIXE, which is an IBM Business Partner, where he gained extensive experience across
different roles and functions. Hugo is deeply passionate about AIX, Linux on Power, and various
cybersecurity solutions. He has contributed to the development of several IBM certification
exams and actively participates in Common Iberia, Common Europe, and TechXchange. He
enjoys delivering technical talks on emerging technologies and real-world use cases.
Carlo Castillo is a Client Services Manager for Right Computer Systems (RCS), an
IBM Business Partner, and Red Hat partner in the Philippines. He has over 30 years of
experience in pre-sales and post-sales support; designing full IBM infrastructure solutions;
creating pre-sales configurations; performing IBM Power installation, implementation, and
integration services; providing post-sales services and technical support for customers, and
conducting presentations at customer engagements and corporate events. He was the first
IBM-certified IBM AIX Technical Support engineer in the Philippines in 1999. As training
coordinator during RCS's tenure as an IBM Authorized Training Provider (2007 - 2014), he
administered the IBM Power curriculum, and conducted IBM training classes covering AIX,
PureSystems, IBM PowerVM®, and IBM i. He holds a degree in Computer Data Processing
Management from the Polytechnic University of the Philippines.
Rohit Chauhan is a Senior Technical Specialist with expertise in IBM i architecture. He works at
Tietoevry Tech Services, Stavanger, Norway, which is an IBM Business Partner and one of the
biggest IT service providers in the Nordics. He has over 12 years of experience working on the
IBM Power platform with design, planning, and implementation of IBM i infrastructure, which
includes high availability and disaster recovery (HADR) solutions for many customers during this
tenure. Before his current role, Rohit worked for clients in Singapore and the UAE in the technical
leadership and security role for the IBM Power domain. He possesses rich corporate experience in
designing solutions, implementations, and system administration. He is a member of Common
Europe Norway with strong focus on the IBM i platform and security. He is recognized as an
IBM Advocate, Influencer, and Contributor for 2024 through the IBM Rising Champions Advocacy
Badge program. He holds a bachelor’s degree in Information Technology. He is an IBM certified
technical expert and also holds an ITIL CDS certificate. His areas of expertise include IBM i,
IBM HMC, security enhancements, IBM PowerHA®, systems performance analysis and tuning,
Backup Recovery and Media Services (BRMS), external storage, PowerVM, and solutions to
customers for the IBM i platform.
Gayathri Gopalakrishnan works for IBM India and has over 22 years of experience as a
technical solution and IT architect. She works primarily in consulting. She is a results-driven
IT Architect with extensive working experience in spearheading the management, design,
development, implementation, and testing of solutions.
Samvedna Jha is a Senior Technical Staff Member in the IBM Power organization,
Bengaluru, India. She holds a master's degree in Computer Applications and has more than
20 years of work experience. In her role as a Security Architect, IBM Power, she has a
worldwide technical responsibility to handle the security and compliance requirements of
Power products. Samvedna is a recognized speaker at conferences, has authored blogs, and
published disclosures. She is also the security focal point for the Power products secure
release process.
Andrea Longo is a Partner Technical Specialist for IBM Power in Amsterdam, the
Netherlands. He has a background in computational biology research and holds a degree in
Science and Business Management from Utrecht University. He is an IBM Quantum
Ambassador whose duties are to prepare academia and industry leaders to be quantum-safe
and to experiment with the immense possibilities of the technology.
Ahmed Mashhour is an IBM Power Technology Services Consultant Lead at IBM Saudi
Arabia. He is an IBM L2 certified Expert. He holds IBM AIX, Linux, and IBM Tivoli®
certifications. He has 19 years of professional experience in IBM AIX and Linux systems. He
is an IBM AIX back-end SME who supports several customers in the US, Europe, and the
Middle East. His core expertise is in IBM AIX, Linux systems, clustering management,
IBM AIX security, virtualization tools, and various IBM Tivoli and database products. He has
authored several publications inside and outside IBM, including co-authoring other
IBM Redbooks publications. He has hosted IBM AIX, Security, PowerVM, IBM PowerHA,
PowerVC, Power Virtual Server, and IBM Storage Scale classes worldwide.
Amela Peku is a Partner Technical Specialist with broad experience in leading technology
companies. She holds an MS in Telecommunication Engineering and is part of the IBM Power
team. She works with IBM Business Partners and customers to showcase the value of
IBM Power solutions. She provided technical support for next-generation firewalls, Webex,
and Webex Teams, focusing on performance and networking, and handled escalations,
working closely with engineering teams. She is certified in Networking, Security, and IT
Management.
Prashant Sharma is the IBM Power Technical Product Leader for the Asia Pacific region. He
is based in Singapore. He holds a degree in Information Technology from the University of
Teesside, England, and an MBA from the University of Western Australia. With extensive
experience in IT infrastructure enterprise solutions, he specializes in pre-sales activities;
client and partner consultations; technical enablement; and the implementation of IBM Power
servers, IBM i, and IBM Storage. He drives technical strategy and product leadership for
IBM Power to help ensure the delivery of innovative solutions to diverse markets.
Vivek Shukla is a Technical Sales Specialist for IBM Power, Hybrid Cloud, artificial
intelligence (AI), and Cognitive Offerings in Qatar working for GBM. He has experience in
sales, application modernization, digital transformation, infrastructure sizing, cybersecurity
and consulting, and SAP HANA, Oracle, and core banking. He is an IBM Certified L2
(Webexpert) Brand Technical Specialist. He has over 22 years of IT experience in technical
sales, infrastructure consulting, IBM Power servers, and AIX, IBM i, and IBM Storage
implementations. He has hands-on experience on IBM Power servers, AIX, PowerVM,
PowerHA, PowerSC, Requests for Proposals, Statements of Work, sizing, performance
tuning, root cause analysis, disaster recovery (DR), and mitigation planning. In addition to
writing multiple IBM Power FAQs, he is also an IBM Redbooks author. He is a presenter,
mentor, and profession champion who is accredited by IBM. He graduated with a bachelor's
degree (BTech) in electronics and telecommunication engineering from IETE, New Delhi, and
a master's degree (MBA) in information technology from IASE University. His areas of
expertise include Red Hat OpenShift, IBM Cloud Paks, Power Enterprise Pools, and Hybrid
Cloud.
Dhanu Vasandani is a Staff Software Test Engineer with over 13 years of experience,
specializing in AIX Operating System Security Testing at IBM Power in Bangalore, India. She
holds a Bachelor of Technology degree in Computer Science and is instrumental in testing
multiple AIX releases across various Power server models. In her current role, Dhanu serves
as the Component Lead for the AIX Operating System Security Guild, overseeing various
subcomponents. She is responsible for conducting comprehensive system testing for pre-GA
and post-GA phases of multiple AIX releases across different Power server models. Dhanu is
known for her expertise in areas such as encryption, Trustchk, audit, role-based access
control (RBAC), and other security aspects, contributing to IBM Lighthouse Community and
IBM Docs. She is recognized for her proficiency in identifying and addressing high-impact AIX
defects within the ISST System organization to help ensure the delivery of top-quality
products to customers.
Henry Vo is an IBM Redbooks Project Leader with 10 years of experience at IBM. He has
technical expertise in business problem solving, risk/root-cause analysis, and writing technical
plans for business. He has held multiple roles at IBM that include project management,
ST/FT/ETE Testing, back-end developer, and a DOL agent for NY. He is a certified IBM z/OS®
Mainframe Practitioner, which includes IBM Z System programming, agile, and
Telecommunication Development Jumpstart. Henry holds a Master of Management Information
System degree from the University of Texas at Dallas.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
This chapter delves into the concept of cyber resilience by focusing on the zero trust security
model, which mandates continuous verification of users, devices, and network components in
an environment with both internal and external threats. It also provides an in-depth look at the
IBM security approach by showcasing how its advanced technologies and methodologies are
designed to defend against various threats.
In summary, this chapter offers a comprehensive analysis of the security and cybersecurity
challenges that are faced by organizations by presenting detailed insights into strategies and
technologies to mitigate these threats and enhance overall security resilience, with a
particular emphasis on the IBM response to these challenges.
At the hardware level, built-in protections are needed to prevent physical tampering and help
ensure data integrity. Virtualization technologies must enhance security by isolating
environments and controlling resource access. The security of the hypervisor, a critical
component in virtualized environments, is paramount in preventing attacks that might
compromise multiple virtual machines (VMs). In the IBM Power environment, logical
partitioning provides strong isolation between different workloads on the same physical
hardware, which enhances security.
Figure 1-1 shows how IBM Power10 works to provide protection at every layer.
Management tools like the Hardware Management Console (HMC) and Cloud Management
Console (CMC) play a vital role in securing hardware and cloud resources. Operating
systems must continuously provide better security features because they are often vectors of
attack, so their contribution is critical to the overall security posture of a system.
Storage security involves protecting data at rest and in transit by using techniques such as
encryption and access controls. Methods for creating secure, resilient copies of data, which
are known as safeguarded copies, and data resiliency are needed to protect against data
corruption or loss. Finally, networking security is integral to overall security with a focus on
secure network design, monitoring, and protection mechanisms to prevent unauthorized
access and data breaches.
1 Source: https://hc32.hotchips.org/assets/program/conference/day1/HotChips2020_Server_Processors_IBM_Starke_POWER10_v33.pdf
At the heart of the IBM approach is the integration of security throughout its systems, which
builds trust and resilience from the ground up. This approach includes safeguarding firmware
integrity with Secure Boot processes and bolstering data protection through hardware-based
encryption acceleration.
IBM goes beyond basic protection with a proactive cybersecurity strategy. It offers secure
storage solutions and advanced threat prevention and detection mechanisms. In the event of
an incident, IBM provides rapid response and recovery options to minimize downtime and effectively
manage operational risks.
Privacy and confidentiality are paramount, which are supported by IBM advanced encryption
technologies. These technologies include pervasive encryption throughout the data lifecycle
and quantum-safe cryptography, which is designed to guard against emerging threats such as
quantum computing.
IBM simplifies regulatory compliance with continuous compliance and audit capabilities.
Automated monitoring and enforcement tools help ensure adherence to industry standards,
and unified security management tools facilitate consistent governance across diverse IT
environments.
Collaborating closely with ecosystem partners, IBM integrates security across hybrid cloud
environments, networks, software systems, architectures, and chip designs. This
comprehensive approach helps ensure holistic protection and resilience across all facets of
an IT infrastructure.
In summary, IBM Infrastructure sets a high standard for security excellence by embedding
advanced features into its solutions and equipping businesses to address both current and
future cybersecurity challenges with confidence. Through collaborative efforts with ecosystem
partners and a focus on regulatory compliance, IBM delivers secure, resilient, and compliant
infrastructure solutions, empowering businesses to thrive in the digital age amid evolving
cyberthreats.
The necessity for digital transformation spans businesses of all sizes, from small enterprises
to large corporations. This message is conveyed clearly through virtually every keynote, panel
discussion, article, or study related to how businesses can remain competitive and
relevant as the world becomes increasingly digital. However, there are many considerations,
with security being one of the most important. Ensuring that the outcome of digital
transformation is more secure than before, and that the transition process is handled
securely, is crucial.
In the era of digital transformation, many organizations report experiencing at least one data
breach during the transformation process. In addition to data breaches, there are other
concerns organizations must address, such as ensuring a secure expansion beyond their
data centers, secure cloud adoption, and mitigating cyberattacks and ransomware.
Data protection and privacy are paramount as data flows between data centers, cloud
services, and edge devices. Ensuring that data is encrypted in transit and at rest across
various platforms is essential. Proper encryption and access controls safeguard stored data,
and compliance with data sovereignty regulations helps ensure that data is processed and
stored according to regional laws. Businesses must implement end-to-end encryption,
enforce access controls, and stay updated on data protection regulations.
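To make the encryption requirement concrete, the following Python sketch shows authenticated encryption of a record at rest with AES-256-GCM. It assumes the third-party cryptography package and an inline key for brevity; in practice, keys belong in a key manager or hardware security module, never alongside the data.

# Minimal sketch: authenticated encryption of data at rest with AES-256-GCM.
# Assumes the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 32-byte key; store in a key manager
aesgcm = AESGCM(key)

plaintext = b"customer record: account=12345, balance=100.00"
nonce = os.urandom(12)                      # unique 96-bit nonce per encryption
associated_data = b"record-id:12345"        # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

# Decryption fails loudly (InvalidTag) if the ciphertext or metadata was altered.
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext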
Access control and identity management become more complex in hybrid environments.
Consistent identity and access management (IAM) across on-premises and cloud
environments is crucial. Managing and monitoring privileged access helps prevent
unauthorized access and insider threats. Strong authentication methods, such as multi-factor
authentication (MFA), enhance security by adding extra layers of protection. Implementing
robust IAM solutions, continuously monitoring access, and ensuring the use of MFA are key
steps.
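As an illustration of the MFA principle, the sketch below shows the time-based one-time password (TOTP) mechanism that many authenticator apps implement. It assumes the third-party pyotp package; secret enrollment and storage are out of scope here.

# Minimal sketch of the TOTP factor behind many MFA deployments.
# Assumes the third-party "pyotp" package (pip install pyotp).
import pyotp

secret = pyotp.random_base32()       # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)            # 30-second time step, 6 digits by default

code = totp.now()                    # what the authenticator app would display
print("Current code:", code)

# The server-side check: valid_window=1 tolerates one step of clock skew.
print("Verified:", totp.verify(code, valid_window=1))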
Visibility and monitoring across hybrid and multi-cloud environments are critical for detecting
anomalies and threats. Achieving comprehensive visibility involves implementing unified
monitoring solutions that provide a holistic view of the entire infrastructure. Consistent logging
and auditing mechanisms are necessary to track activities and support incident response.
Network monitoring helps detect and respond to threats in real time. Organizations should
invest in integrated monitoring tools, establish thorough logging practices, and deploy
real-time network monitoring systems.
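A minimal example of the logging-and-anomaly-detection idea: flag a source address that accumulates too many failed logins inside a sliding window. The event format, threshold, and window are hypothetical; a production deployment would route such events to a SIEM rather than print alerts.

# Minimal sketch of log-based anomaly detection with a sliding window.
from collections import defaultdict, deque

WINDOW_SECONDS = 300
THRESHOLD = 5

failures = defaultdict(deque)  # source IP -> timestamps of recent failures

def record_failed_login(source_ip: str, timestamp: float) -> bool:
    """Record a failed login; return True if the source now looks anomalous."""
    q = failures[source_ip]
    q.append(timestamp)
    while q and timestamp - q[0] > WINDOW_SECONDS:   # drop events outside window
        q.popleft()
    return len(q) >= THRESHOLD

# Example: six failures from one address in under five minutes trips the alert.
for ip, ts in [("203.0.113.7", t) for t in (0, 30, 60, 90, 120, 150)]:
    if record_failed_login(ip, ts):
        print(f"ALERT: possible brute force from {ip}")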
Network security involves implementing network segmentation to limit the spread of threats.
Ensuring secure connections between data centers, cloud environments, and edge locations
is essential. Deploying and managing firewalls and intrusion detection/prevention systems in
a coordinated manner strengthens network security. Businesses should design segmented
network architectures, secure connectivity channels, and maintain robust firewalls, and
intrusion detection systems (IDS) and intrusion prevention systems (IPS).
Expanding operations beyond the traditional data center offers numerous benefits, but it also
introduces many security challenges that organizations must proactively address. By
prioritizing comprehensive data protection, robust access control, enhanced visibility,
regulatory compliance, advanced threat management, consistent configuration, and strong
network security, businesses can mitigate these risks and fully use the advantages of hybrid
and multi-cloud environments. Security in these complex infrastructures is an ongoing
process that requires vigilance, adaptability, and a commitment to staying ahead of emerging
threats.
One of the most critical security challenges in cloud adoption is the potential for data
breaches and data loss. Sensitive information that is stored in the cloud can be an attractive
target for cybercriminals. Unauthorized access can lead to the exposure of confidential data,
resulting in financial losses, reputational damage, and legal repercussions. To mitigate these
risks, businesses should implement end-to-end encryption for data at rest and in transit,
enforce strict access control policies, and conduct regular security audits and vulnerability
assessments.
Insider threats, whether from malicious intent or inadvertent actions, pose a significant risk.
Employees, contractors, or third-party vendors with access to cloud systems can potentially
misuse their access, leading to data leaks or disruptions. To counter these threats,
businesses should implement regular security training programs, use monitoring and
anomaly detection systems, and apply the principle of least privilege to limit access based on
necessity.
The shared responsibility model in cloud security, where both the cloud provider and the
customer share security responsibilities, can lead to confusion and security gaps. Clear
definitions of security responsibilities in contracts, regular reviews of cloud provider security
documentation, and ongoing collaboration between IT teams and cloud providers are
essential to avoid misunderstandings and ensure comprehensive security coverage.
Application programming interfaces (APIs) are essential for cloud integration and operations,
but can also introduce vulnerabilities. Poorly secured APIs can become entry points for
attackers. To secure APIs, organizations should adopt secure coding practices, use API
gateways to manage and secure API traffic, and implement rate limiting to prevent abuse.
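The following sketch illustrates the rate-limiting technique mentioned above with a classic token bucket. The rate and burst parameters are illustrative only; in practice, an API gateway usually enforces this per client or per API key.

# Minimal sketch of API rate limiting with a token bucket: each client gains
# "rate" tokens per second up to "capacity"; a request is served only if a
# token is available.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate             # tokens added per second
        self.capacity = capacity     # burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                 # caller would return HTTP 429

bucket = TokenBucket(rate=2, capacity=5)      # 2 requests/s, bursts of 5
print([bucket.allow() for _ in range(7)])     # first 5 pass, then throttled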
Cloud accounts are vulnerable to hijacking through phishing, credential stuffing, or other
attack methods. Once compromised, attackers can gain control over cloud resources and
data. Enforcing MFA, implementing strong password policies, and continuously monitoring
account activities for suspicious behavior are crucial steps to protect cloud accounts from
hijacking.
Interfaces and APIs, as gateways to cloud services, need robust security measures. If not
properly secured, they can be exploited to gain unauthorized access or disrupt services.
Following best practices in API design and security, conducting regular penetration testing
and vulnerability assessments, and implementing strong authentication and authorization
measures are necessary to secure these critical components.
Adopting cloud technology offers numerous benefits but also introduces a range of security
challenges that organizations must proactively address. By implementing robust security
measures, maintaining regulatory compliance, and fostering a culture of security awareness,
businesses can mitigate these risks and fully use the advantages of the cloud. Security in the
cloud is an ongoing process that requires vigilance, adaptability, and a commitment to staying
ahead of emerging threats.
Employee training and awareness are essential because many attacks exploit human
vulnerabilities. Regular training helps employees recognize phishing attempts, follow best
practices for data security, and act as a front-line defense against cyberthreats.
Endpoint security is crucial as employees access corporate resources from various devices.
Advanced endpoint protection solutions, including anti-virus software, endpoint detection and
response (EDR) tools, and mobile device management (MDM) systems safeguard endpoints
from malicious activity.
Network segmentation limits the spread of ransomware and other threats. Dividing the
network into smaller segments and implementing strong access controls and monitoring can
contain damage and prevent lateral movement by attackers.
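A tiny sketch of segmentation expressed as default-deny policy: only explicitly allowed segment-to-segment flows pass. The segment names and flows are hypothetical, and real enforcement lives in firewalls, VLAN/VRF design, or SDN controllers.

# Minimal sketch of segmentation as default-deny policy.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"),
    ("app-tier", "db-tier"),
    ("mgmt", "web-tier"),
    ("mgmt", "app-tier"),
    ("mgmt", "db-tier"),
}

def is_allowed(src_segment: str, dst_segment: str) -> bool:
    """Default-deny: a flow is permitted only if explicitly listed."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(is_allowed("web-tier", "app-tier"))  # True: expected traffic path
print(is_allowed("web-tier", "db-tier"))   # False: lateral movement blocked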
Incident response planning is vital for minimizing the impact of attacks. An up-to-date incident
response plan (IRP) with clear communication protocols, roles, and procedures for isolating
affected systems and restoring operations is essential. Regular drills help ensure the
readiness of the response team.
Cyberinsurance provides extra protection, covering the costs of recovery, legal fees, data
restoration, and customer notification if there is an attack.
Cyber resilience encompasses strategies to prepare for, respond to, and recover from
cyberincidents effectively, which include comprehensive risk management to prioritize critical
assets and threats, incident response planning with clear protocols, regular data backups,
and continuous improvement through assessments and updates.
The zero trust model challenges traditional security approaches by assuming no implicit trust
based on network location. Instead, it verifies and validates all devices, users, and
applications attempting to connect, regardless of their location. Key principles include explicit
verification, least privilege access, micro-segmentation to limit lateral movement, and
continuous monitoring of network traffic and user behavior.
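The following sketch models a zero trust policy decision in miniature: explicit verification and device posture are checked on every request, and least privilege is applied by role. All attributes, roles, and resources here are illustrative.

# Minimal sketch of a zero trust policy decision point: no trust is granted
# by network location; every request is evaluated on identity, posture, and
# requested privilege.
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool      # explicit verification (e.g., MFA passed)
    device_compliant: bool        # posture check (patched, encrypted, managed)
    role: str                     # identity-derived role
    resource: str                 # what is being accessed

LEAST_PRIVILEGE = {               # role -> resources that role may touch
    "dba": {"db-tier"},
    "web-admin": {"web-tier"},
}

def authorize(req: Request) -> bool:
    if not (req.user_authenticated and req.device_compliant):
        return False              # verify explicitly, every time
    return req.resource in LEAST_PRIVILEGE.get(req.role, set())

print(authorize(Request(True, True, "dba", "db-tier")))        # True
print(authorize(Request(True, False, "dba", "db-tier")))       # False: bad posture
print(authorize(Request(True, True, "web-admin", "db-tier")))  # False: no privilege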
In conclusion, cyber resilience and the zero trust model are essential for organizations
striving to fortify their security posture amid a complex threat landscape. By adopting
proactive strategies and integrating zero trust principles into their security framework,
businesses can safeguard critical assets, maintain operational continuity, and mitigate the
impact of cyberattacks. These frameworks strengthen defenses and foster a culture of
security awareness and readiness across the organization, which helps ensure ongoing
protection against evolving cyberthreats.
Cybersecurity frameworks
There are groups of regulations that address cybersecurity requirements, such as the
following ones:
National Institute of Standards and Technology (NIST) Cybersecurity Framework
Developed by NIST, this framework provides guidelines for improving cybersecurity
practices. Although it is not mandatory, many organizations adopt it to align with best
practices and regulatory expectations.
Federal Information Security Management Act (FISMA)
In the United States, FISMA requires federal agencies and contractors to implement
information security programs and comply with NIST standards.
Overall, government regulations help establish a baseline for security practices, protect
sensitive information, and promote trust in digital systems. Organizations must understand
and comply with these regulations to safeguard their operations and avoid legal
repercussions.
Figure 1-2 illustrates how the IBM Power ecosystem with IBM Power10 processors provides
protection at every layer.
By adopting these practices, users of IBM Power servers can bolster their defenses against
current threats and foster a more resilient posture to adapt to future security challenges.
2 Source: https://events.ibs.bg/events/itcompass2021.nsf/IT-Compass-2021-S06-Power10.pdf
Integrate physical barriers with electronic security measures, such as surveillance and
access control systems, to create a comprehensive security envelope around sensitive
hardware. Providing access logging capabilities also helps identify personnel who have
accessed the environment.
Access controls
Access controls are designed to ensure that only authorized individuals can enter the
specific physical areas where sensitive hardware is located. Multiple types of access control
systems exist, each using different authentication methodologies. These systems can be
broadly grouped as follows:
Biometric systems
Biometric systems use fingerprint scanners, retina scans, and facial recognition
technologies to provide a high level of security by verifying the unique physical
characteristics of individuals.
Electronic access cards
Technologies such as RFID cards, magnetic stripe cards, and smart cards grant access
based on credentials stored on the card. Many of these technologies can be managed
centrally to update permissions as needed.
Personal identification number (PIN) codes and keypads
Requiring the entry of PINs into keypads provides a method of access control that can be
updated and managed remotely.
Important: Data management and privacy considerations are involved with the collection
and storage of surveillance information. Manage surveillance footage securely, including
storage, access controls, and compliance with privacy laws and regulations to protect the
rights of individuals.
TPM enhances Secure Boot by recording measurements of the system’s firmware and
configuration during the startup process. Through an attestation process, the TPM can
provide a signed quote that can be used to verify the system integrity and firmware
configuration at any time.
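Conceptually, the TPM measurement chain works like the following sketch, where each boot component is hashed into a register with extend(old, measurement) = SHA-256(old || measurement). This simulates only the idea; it is not a TPM interface, and the component names are placeholders.

# Conceptual sketch of a TPM Platform Configuration Register (PCR) chain.
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + measurement).digest()

pcr = b"\x00" * 32                          # PCRs start zeroed at power-on
boot_chain = [b"firmware-image", b"boot-loader", b"os-kernel"]

for component in boot_chain:
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

print("Final PCR:", pcr.hex())

# Tampering with even the first stage already diverges from the expected chain:
tampered = pcr_extend(b"\x00" * 32, hashlib.sha256(b"evil-firmware").digest())
assert tampered != pcr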
TPM also provides key storage and management. It safeguards cryptographic keys at a
hardware level, preventing them from being exposed to outside threats. These keys can be
used for encrypting data and securing communications (for example, during PowerVM Live
Migration).
Secure Boot verifies the integrity of the firmware, boot loader, and operating system to
prevent unauthorized code from running during the boot process. It helps ensure that only
trusted software that is signed with a valid certificate runs, which protects against rootkits and
boot-level malware that might compromise the system’s security before the operating system
starts.
Secure Boot uses digital signatures and certificates to validate the authenticity and integrity of
firmware and software components. Each component in the boot process is signed with a
cryptographic key, and the system verifies these signatures before allowing the component to
run.
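The signature check at the core of this process can be sketched as follows, using Ed25519 from the cryptography package as a stand-in. Real firmware uses vendor-specific key formats and certificate chains, so treat this purely as an illustration of verify-before-run.

# Minimal sketch of the signature check at the heart of Secure Boot: a
# component runs only if its signature verifies against a trusted public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

vendor_key = Ed25519PrivateKey.generate()        # held by the software vendor
trusted_public_key = vendor_key.public_key()     # provisioned into the platform

component = b"bootloader image bytes"
signature = vendor_key.sign(component)           # produced at build/release time

def verify_before_boot(image: bytes, sig: bytes) -> bool:
    try:
        trusted_public_key.verify(sig, image)    # raises if invalid
        return True
    except InvalidSignature:
        return False

print(verify_before_boot(component, signature))              # True: boot proceeds
print(verify_before_boot(component + b"tamper", signature))  # False: boot halts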
Organizations can manage keys and certificates that are used in Secure Boot through
configuration settings, which enables them to control which software and firmware are
trusted.
Secure Boot helps prevent unauthorized code execution during the boot process, which
protects the system from early-stage attacks and aids in meeting compliance requirements
for security standards and regulations that mandate Secure Boot processes.
Hardware encryption
Hardware encryption involves using dedicated processors that perform cryptographic
operations directly within the hardware itself, which enhances security by isolating the
encryption process from software vulnerabilities.
Encryption can be implemented at several layers, which provides protection of your data as it
moves through the system. Power10 provides encryption acceleration that is built in to the
chip. The system can encrypt memory by default within the system with no performance
impact. As data leaves the processor, encryption can be used at the disk level, file system
level, and network level to provide complete protection for your data.
Regular vulnerability assessments are vital for identifying weaknesses in hardware that might
be exploited by attackers or fail under operational stress. These assessments should include
physical inspections, cybersecurity evaluations, and testing against environmental and
operational conditions. Techniques such as penetration testing and red team exercises can
simulate real-world attack scenarios to test the resilience of hardware components.
Protecting your environment should include the usage of continuous monitoring technologies,
including hardware sensors and network monitoring tools, which play a critical role in the
early detection of potential failures or security breaches.
Regular reviews help ensure that risk management strategies and practices stay relevant as
new threats emerge and business needs change. This task involves reevaluating and
updating risk assessments, mitigation strategies, and response plans at defined intervals or
after significant system changes.
Having detailed incident response and recovery plans is essential for minimizing downtime
and restoring functions if there is a hardware failure or a security incident. These plans must
include roles and responsibilities, communication strategies, and recovery steps.
Training programs for IT staff, operators, and other stakeholders that are involved in hardware
management are crucial for maintaining system security. Effective documentation and
reporting are also fundamental to the risk management process. Be transparent in reporting
to stakeholders and regulatory bodies.
1.4.5 Virtualization
Virtualization has become a cornerstone of modern IBM Power servers by enabling enhanced
flexibility and efficiency. However, the shift to virtual environments also introduces specific
security challenges that must be addressed to protect these dynamic and often complex
systems.
The function that enables virtualization in a system is called a hypervisor, also known as a
virtual machine monitor (VMM). The hypervisor is a type of computer software that creates
and runs VMs, which are also called logical partitions (LPARs). The hypervisor presents the
guest operating systems with a virtual operating platform and manages the running of the
guest operating systems. Hypervisors are classified into two types:
A Type 1 hypervisor is a native hypervisor that runs on bare metal.
A Type 2 hypervisor is hosted on an underlying operating system.
The Type 1 hypervisor is considered more secure because it can provide better isolation
between the VMs and generally offers better performance to those VMs.
The following list provides some security implications that must be addressed by the
virtualization layer:
Isolation failures
Because there are multiple VMs running at any one time on the same physical hardware, it
is imperative that the hypervisor maintains strict isolation between VMs to prevent a
breach in one VM from compromising others.
Hypervisor security
The hypervisor is the hardware and software layer that enables virtualization, so it is a
critical security focal point. Ensuring that the hypervisor is secure and kept up to date is
key.
HMC
The HMC is used to configure and manage IBM Power servers. Its capabilities encompass
logical partitioning, centralized hardware management, Capacity on Demand (CoD)
management, advanced server features, redundant and remote system supervision, and
security.
The HMC provides a reliable and secure console for IBM Power servers. It is built as an
appliance on a highly secured system, tied to specific hardware, and not compatible with
other systems. This stringent build process includes incorporating advanced hardware and
software security technologies from IBM. Furthermore, HMCs are closed and dedicated,
meaning that users cannot add their own software. These features work together to create a
highly secure environment.
Table 1-1 Supported operating systems for the Power E1080
PowerVM Virtual I/O Server (VIOS): 4.1.0.0 or later; 3.1.4.10 or later; 3.1.3.10 or later; 3.1.2.30 or later; 3.1.1.50 or later
IBM i: 7.5 or later; 7.4 TR5 or later; 7.3 TR11 or later
Red Hat Enterprise Linux (RHEL): 9.0 or later; 8.4 or later
SUSE Linux Enterprise Server: 15.3 or later; 12.5 or later
For a full list of operating systems that run on IBM Power, see Operating systems.
Note: Table 1-1 shows the supported operating systems of the Power E1080. For more
information about software maps detailing which versions are supported on which specific
IBM Power server models (including previous generations of IBM Power),
see System Software Maps.
Storage topologies
There are multiple methods of connecting storage to your servers. The different options
evolved over time to meet different requirements and each type has benefits and
disadvantages. They also vary in performance, availability, and price.
In summary, direct-attached storage (DAS) offers high performance and simplicity but can be limited in scalability and
sharing capabilities. Its benefits make it suitable for scenarios where high speed and control
are priorities, and its disadvantages suggest that it might not be ideal for environments
needing extensive collaboration or large-scale storage expansion.
Network-attached storage
NAS is a dedicated file storage system that is connected to a network so that multiple users
and devices can access and share data over the network. NAS devices typically contain one
or more hard disk drives and have their own operating system and management interface.
NAS generally has the following characteristics:
Usage:
– Small and medium businesses (SMBs): SMBs use NAS for file sharing, backup
solutions, and as a centralized repository for documents and other business-critical
data. NAS devices in this context can offer features like user authentication, access
control, and remote access.
– Enterprise environments: In enterprises, NAS systems are used for departmental file
sharing, backup, and collaboration. Advanced NAS devices can support high-capacity
storage, multiple RAID configurations for redundancy, and integration with enterprise
applications.
Benefits:
– Ease of access: NAS provides a centralized location for data, making it accessible from
any device on the network, which facilitates file sharing and collaboration among
multiple users.
– Scalability: NAS systems can be expanded by adding extra drives or connecting
multiple NAS units, which enables scalable storage solutions as data needs grow.
– Cost-effective: NAS is more affordable compared to SAN solutions, and offers a good
balance between performance and cost, especially for SMBs and home users.
– Centralized management: NAS devices come with management interfaces that enable
setup, monitoring, and maintenance. They often include features like data encryption,
access controls, and user management.
– Data redundancy: Many NAS devices support RAID configurations, which provide
redundancy and protection against data loss due to drive failure.
Disadvantages:
– Network dependency: NAS performance depends on the network’s speed and
reliability. High network traffic or network issues can impact access speeds and
performance.
– Limited performance: Although NAS provides adequate performance for many
applications, it might not be suitable for high-performance tasks that require fast data
access, such as high-frequency trading or large-scale data processing.
In summary, NAS offers centralized, accessible, and scalable storage solutions that are
suitable for a wide range of environments, from home use to enterprise settings. It excels in
providing file sharing and backup capabilities but can face limitations in performance and
complexity as needs grow. Proper network infrastructure and security measures are crucial
for optimizing NAS performance and protecting data.
In summary, SAN provides high-performance, scalable, and centralized storage solutions that
are ideal for enterprise and data center environments. It excels in performance and reliability,
but can be costly and complex to implement and manage. Organizations that use SANs must
balance their need for high-speed data access with the associated infrastructure and
operational costs.
Cloud storage
Cloud storage refers to the practice of storing data on remote servers that can be accessed
over the internet. Providers manage these servers and offer various services for storing,
managing, and retrieving data. This model contrasts with traditional on-premises storage
solutions, where data is stored locally on physical devices. Cloud storage generally has the
following characteristics:
Usage:
– SMBs: SMBs use cloud storage for file sharing, collaboration, and remote work. It
provides a cost-effective way to scale storage needs without investing in physical
infrastructure.
– Large enterprises: Enterprises use cloud storage for scalable data storage solutions,
disaster recovery (DR), and global access. It supports extensive data needs, facilitates
collaboration, and integrates with various enterprise applications.
– Developers and IT professionals: Cloud storage is used for hosting applications,
managing databases, and providing scalable storage solutions for big data and
analytics.
Benefits:
– Scalability: Cloud storage offers virtually unlimited storage capacity. Users can scale
their storage up or down based on their needs without needing to invest in physical
hardware.
– Accessibility: Data that is stored in the cloud can be accessed from anywhere with an
internet connection. This approach supports remote work, collaboration, and access
from multiple devices.
– Cost-effectiveness: Typically, cloud storage operates on a pay-as-you-go model, so
users pay only for the storage that they use. This approach reduces the upfront costs
that are associated with purchasing and maintaining physical storage hardware.
Cloud storage offers flexible, scalable, and accessible solutions suitable for personal,
business, and enterprise needs. Its benefits include scalability, cost-effectiveness, and
automatic maintenance, which makes it an attractive option for modern data management.
However, concerns about security, reliance on internet connectivity, ongoing costs, and
potential vendor lock-in are important considerations that users must address when opting for
cloud storage solutions.
Here are some best practices for managing storage across various environments, including
on-premises and cloud-based solutions:
Capacity planning:
– Assess needs: Regularly evaluate current and future storage needs based on data
growth projections, application requirements, and business goals.
– Optimize usage: Use tools to monitor storage usage and optimize space. Consider
implementing data deduplication and compression to reduce the amount of storage
required. A minimal capacity-monitoring sketch follows this list.
Data classification and organization:
– Classify data: Categorize data based on its importance, sensitivity, and usage patterns.
This best practice helps with applying appropriate storage and security policies.
– Organize efficiently: Structure storage systems to facilitate access and retrieval. Use
logical grouping and hierarchical storage management to keep data organized.
Data backup and recovery:
– Regular backups: Implement a consistent backup schedule to help ensure that data is
regularly backed up. Consider full, incremental, and differential backups based on the
needs.
– Test recovery: Periodically test backup and recovery procedures to help ensure that
data can be restored quickly and accurately if there is a loss or corruption.
Data retention policies:
– Define policies: Establish clear data retention policies based on legal requirements,
business needs, and data usage patterns. Determine how long different types of data
should be kept before deletion or archiving.
– Automate management: Use automated tools to enforce retention policies, manage the
data lifecycle, and handle the archiving or deletion of obsolete data.
Performance optimization:
– Monitor performance: Regularly monitor storage performance metrics, such as I/O
operations and response times to identify and address bottlenecks.
– Optimize storage: Use techniques such as tiered storage to allocate high-performance
storage to critical applications while using lower-cost storage for less critical data.
Cost management:
– Budgeting: Develop and adhere to a storage budget that aligns with business needs
and growth projections.
– Cost optimization: Regularly review storage costs and explore cost-saving options,
such as moving infrequently accessed data to lower-cost storage tiers or cloud storage
solutions.
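The following minimal sketch implements the capacity-monitoring practice from the list above: it warns when a monitored file system crosses a usage threshold. The mount points and the 85% threshold are illustrative; production monitoring would feed an alerting or capacity-planning tool instead of printing.

# Minimal sketch: warn when any monitored file system exceeds a threshold.
import shutil

MONITORED_PATHS = ["/", "/home", "/var"]   # hypothetical mount points
THRESHOLD_PERCENT = 85

for path in MONITORED_PATHS:
    try:
        usage = shutil.disk_usage(path)
    except FileNotFoundError:
        continue                            # skip mounts that do not exist here
    percent_used = usage.used / usage.total * 100
    status = "WARN" if percent_used >= THRESHOLD_PERCENT else "ok"
    print(f"{status:4} {path:6} {percent_used:5.1f}% used "
          f"({usage.free // 2**30} GiB free)")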
In summary, Safeguarded Copy refers to backup copies of data that are protected to ensure
their integrity, security, and reliability. This feature involves creating consistent and reliable
backups, encrypting and securing backup data, protecting against threats like ransomware,
and automating management processes. By implementing safeguarded copies, organizations
can ensure that their data backups are robust, secure, and capable of supporting effective DR
and data protection strategies.
Important: A Safeguarded Copy is not just a physical copy of the data. It involves
automation and management to take regular copies, validate them, and store them so that
they cannot be modified. Equally important is the ability to quickly recognize when
your data is compromised and recover to a last good state. It also involves business
processes to recover applications and databases to minimize data loss.
1.4.9 Networking
Security considerations for networking involve several key aspects to protect data integrity,
confidentiality, and availability across networked systems. Whether using physical networking
connections or virtualizing the network functions, the considerations are generally the same.
Here are some essential considerations:
Network segmentation
Dividing a network into segments can limit the spread of attacks and contain potential
breaches. Segmentation helps isolate sensitive data and systems from less critical areas.
Firewalls
Firewalls act as barriers between internal networks and external threats. They filter
incoming and outgoing traffic based on predefined security rules.
Intrusion detection and prevention systems
These systems monitor network traffic for suspicious activity and can either alert
administrators or block potential threats.
Encryption
Encrypting data that is transmitted over the network helps ensure that even if data is
intercepted, it remains unreadable without the proper decryption keys.
Access controls
Implementing strong access controls, including MFA and least privilege principles, helps
ensure that only authorized users and devices can access network resources.
Regular updates and patching
Keeping network devices and software up to date with the latest security patches helps
protect against known vulnerabilities and exploits.
Network monitoring
Continuous monitoring of network traffic and device behavior helps detect and respond to
anomalies and potential security incidents in real time.
Secure configuration
Ensuring that network devices (for example, routers and switches) are securely configured
according to best practices reduces the risk of exploitation by bad actors, and it helps
reduce the risk of human error and social engineering attacks.
Addressing these considerations helps build a robust network security posture and protect
against various cyberthreats.
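For example, on a Linux on Power LPAR that uses firewalld, a minimal policy might allow only the services that the workload requires. This sketch assumes the default public zone; adapt the zone and service names to your environment:
Allow only SSH and HTTPS in the public zone:
firewall-cmd --permanent --zone=public --add-service=ssh
firewall-cmd --permanent --zone=public --add-service=https
Reload the firewall to apply the changes:
firewall-cmd --reload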
Workloads on the IBM Power10 server see significant benefits from improved cryptographic
accelerator performance compared to previous generations. Specifically, the Power10 chip
supports accelerated cryptographic algorithms such as AES, SHA2, and SHA3, resulting in
considerably higher per-core performance for these algorithms. This enhancement enables
features like AIX Logical Volume Encryption to operate with minimal impact on system
performance.
The processor core technology of the Power10 incorporates integrated security protections:
Improved cryptographic performance
Integrated cryptographic support reduces the performance impact of encrypting and
decrypting your data so that you can make encryption pervasive to protect all your critical
data.
Increased application security
Hardened defenses against return-oriented programming (ROP) attacks.
Simplified hybrid cloud security
Setup-free hybrid cloud security administration with a single interface.
Enhanced virtual machine (VM) isolation
This technology, which provides the industry's most secure VM isolation, defends against
attacks that exploit operating system or application vulnerabilities in the VM to access
other VMs or the host system.
The following comparison contrasts transparent memory encryption, fully homomorphic
encryption (FHE), and quantum-safe encryption (QSE):
Encryption scope:
– Transparent memory encryption: Secures data in memory.
– FHE: Allows computations on encrypted data.
– QSE: Protects against future quantum computing threats.
Use cases:
– Transparent memory encryption: Used to protect data in memory.
– FHE: Used for performing secure computations on sensitive data without decrypting it.
– QSE: Provides long-term data protection and secure communications in the future.
Delaying QSE adoption might have severe consequences. Legacy cryptographic systems left
unaltered might be compromised if there is a successful quantum attack, which can expose
sensitive data and risk confidential business transactions and individual privacy. Financial
institutions, critical infrastructure providers, and government agencies might face significant
challenges in maintaining operational integrity and confidentiality. Therefore, prioritizing QSE
implementation is crucial for long-term cybersecurity resilience.
Power10 supports these quantum-safe algorithms to help ensure robust security even as
quantum computing advances.
Power10 implementation
Power10 processors support these quantum-safe algorithms by using the following features:
Crypto engines: Multiple engines per core enable efficient execution of cryptographic
operations.
Software updates: The architecture enables updates to cryptographic libraries, which help
ensure the integration of new quantum-safe algorithms as they become standardized.
The Power10 design and capabilities help ensure robust security against future quantum
threats by using hardware acceleration and flexible software updates to maintain
high-security standards as the cryptographic landscape evolves.
With four times as many AES encryption engines, Power10 processor technology is designed to
offer faster encryption performance than IBM Power9 processor-based servers, with updates
for the most stringent standards of today and future cryptographic standards, which include
post-quantum and FHE. It also introduces extra improvements to container security. By using
hardware features for a seamless user experience, TME aims to simplify encryption and
support end-to-end security without compromising performance.
With these coprocessors, you can accelerate cryptographic processes that safeguard and
secure your data while protecting against various attacks. The IBM 4769, IBM 4768, and
IBM 4767 HSMs deliver security-rich, high-speed cryptographic operations for sensitive
business and customer information with the highest level of certification for commercial
cryptographic devices.
Cryptographic Coprocessor cards relieve the main processor from cryptographic tasks. The
IBM HSMs have a PCIe local-bus-compatible interface with tamper-responding,
programmable, cryptographic coprocessors. Each coprocessor contains a CPU, encryption
hardware, RAM, persistent memory, a hardware random number generator, a time-of-day
clock, infrastructure firmware, and software. Their specialized hardware performs AES, DES,
TDES, RSA, Elliptic Curve Cryptography, AESKW, HMAC, DES/3DES/AES message
authentication codes (MACs), SHA-1, SHA-224 to SHA-512, SHA-3, and other cryptographic
processes. This hardware relieves the main processor from these tasks. The coprocessor
design protects your cryptographic keys and any sensitive customer applications.
The CHIM workstation connects through secure sessions to the cryptographic coprocessors
to enable authorized personnel to perform the following tasks:
View the coprocessor status.
View and manage the coprocessor configuration.
Manage coprocessor access control (user roles and profiles).
Generate and load coprocessor master keys.
Create and load operational key parts.
IBM i CCA Cryptographic Service Provider (CSP), which is delivered as IBM i Option 35,
supports the IBM CHIM catcher. This support is provided by IBM i Program Temporary Fixes
(PTFs).
The CHIM catcher is controlled like all other TCP servers on IBM i. Use the STRTCPSVR,
ENDTCPSVR, and CHGTCPSVR commands to manage the CHIM catcher. The server application
value for CHIM is *CHIM. The CHIM catcher port is configured with service name “chim”, which
is set to port 50003. The CHIM catcher listens only for incoming connections on localhost. The
CHIM catcher ends itself if no server activity occurs for 1 hour.
Benefits
IBM PCIe Cryptographic Coprocessors have the following benefits:
Keep data safe and secure.
Safeguard data with a tamper-responding design and sensors that protect against module
penetration and power or temperature manipulation attacks.
Choose your platform.
Available on select IBM Z servers with z/OS or Linux; IBM LinuxONE Emperor and
Rockhopper; IBM Power servers; and x86-64 servers with certain Red Hat Enterprise
Linux (RHEL) releases.
Note: At the time of writing, IBM Power supports both the 4769 and 4767 HSMs. The 4769
is available, but the 4767 was withdrawn from marketing.
The remainder of this section covers the 4769 Cryptographic Coprocessor, which is the
available HSM option for IBM Power servers at the time of writing.
The IBM 4769 is available as Customer Card Identification Number (CCIN) C0AF (without a
blind-swap cassette custom carrier) (Feature Code EJ35) and as CCIN C0AF (with
blind-swap cassette custom carrier) (Feature Code EJ37) on IBM Power10 servers, either on
IBM AIX, IBM i, or Linux (RHEL or SUSE Linux Enterprise Server) operating systems. It is
also available as Feature Codes EJ35 and EJ37 on IBM Power9 servers, either on IBM AIX or
IBM i.
The IBM 4769 hardware provides significant performance improvements over its
predecessors while enabling future growth. The secure module contains redundant
IBM PowerPC® 476 processors and custom symmetric key and hashing engines to perform
AES, DES, TDES, SHA-1 and SHA-2, MD5 and HMAC, and public key cryptographic
algorithm support for RSA and Elliptic Curve Cryptography. Other hardware support includes
a secure real-time clock, a hardware random number generator, and a prime number
generator. The secure module is protected by a tamper-responding design that protects
against various physical attacks.
The “Payment Card Industry HSM” standard, PCI HSM, is issued by the PCI Security
Standards Council. It defines physical and logical security requirements for HSMs that are
used in the finance industry. The IBM CEX7S with CCA 7.x has PCI HSM certification.
Source: https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4079
Source: https://listings.pcisecuritystandards.org/popups/pts_device.php?appnum=4-20358
PowerVM, the virtualization management tool for IBM Power, provides real-time monitoring of
virtualized environments. It helps track performance, resource utilization, and security status
across VMs and physical servers. Also, the Hardware Management Console (HMC) offers
real-time monitoring and management of IBM Power servers. It provides insights into system
health, performance metrics, and potential security issues.
Power10 servers can be configured to generate real-time alerts for various events, including
security incidents, system performance issues, and hardware faults. These alerts can be
integrated with enterprise monitoring solutions for centralized management. Integration with
Security Information and Event Management (SIEM) systems enables real-time analysis of
security events and incidents, which helps in detecting and responding to potential threats as
they occur.
Here are some options within the IBM Power ecosystem to support EDR:
IBM PowerSC is a security and compliance solution that is optimized for virtualized
environments on IBM Power servers running AIX, IBM i, or Linux. PowerSC sits on top of
the IBM Power server stack, which integrates security features that are built at different
layers. You can now centrally manage security and compliance on Power servers for all
IBM AIX and Linux on Power endpoints.
For more information, see 9.3, “Endpoint detection and response” on page 239.
IBM Security® QRadar® EDR remediates known and unknown endpoint threats in near
real time with intelligent automation that requires little-to-no human interaction.
For more information, see “IBM QRadar Suite (Palo Alto Networks)” on page 268.
The Power10 processor architecture incorporates several features to enhance control flow
security and mitigate the risk of ROP attacks. These features include improved
hardware-based encryption and advanced protection against side-channel attacks. Although
these features can mitigate some attacks, they do not make ROP attacks impossible, but
rather more challenging.
Modern compilers and operating systems for Power10 can include extra security features and
mitigations. Developers should ensure that their software is built with the latest security
practices and that the operating system is up to date with relevant patches.
Secure Boot protects the initial program load (IPL) by ensuring that only authorized modules
(ones that are cryptographically signed by the manufacturer) are loaded during the boot
process.
Trusted Boot starts with the platform that is provided by the Secure Boot process and then
builds on it by recording measurements of the system's firmware and configuration during the
startup process to the Trusted Platform Module (TPM). Through an attestation process, the
TPM can provide a signed quote that can be used to verify the system firmware integrity at
any time.
Figure 2-3 illustrates how the different layers work together to support Secure Boot within
Power10.
Secure Boot does not provide protection against the following attacks:
OS-software based attacks to gain unauthorized access to customer data
Rogue system administrators
Hardware physical attacks (for example, chip substitutions or bus traffic recording)
Secure Boot implements a processor-based chain of trust that is based in the IBM POWER®
processor hardware and enabled by the IBM Power firmware stack. Secure Boot provides for
a trusted firmware base to enhance the confidentiality and integrity of customer data in a
virtualized environment.
Secure Boot establishes trust through the platform boot process. With Secure Boot, the
system starts in a trusted and defined state. Trusted means that the code that runs during the
IPL process originates from and is signed by the platform manufacturer, and has not been
modified since. For more information about Secure Boot processing in PowerVM, see Secure
Boot in IBM Documentation or this Secure Boot PDF.
The boot loader validates the kernel's digital signature. The AIX Secure Boot feature uses
Trusted Execution (TE) technology, which relies on the Trusted Signature Database (TSD).
The TSD stores digital signatures of device drivers, application binary files, and other AIX
code. The feature checks the integrity of boot and initialization code up to the end of the
inittab file.
The AIX Secure Boot feature is configured by using the management console; the HMC
supports this function.
The AIX operating system supports the following basic Secure Boot settings:
0 Secure Boot disabled.
1 Enabled (or log only).
2 Enforce (abort the boot operation if signature verification fails).
3 Enforce policy 2 and avoid loading programs or libraries that are not
found in TSD, which also disables write access to /dev/*mem devices.
4 Enforce policy 3 and disable the kernel debugger (KDB).
If file integrity validation fails during the boot operation in Audit mode, the LPAR continues to
boot, but errors are logged in /var/adm/ras/securebootlog for the system administrator to
inspect after the LPAR starts. When digital signature verification of files fails during the boot
in Enforce mode, the boot process aborts, and the LPAR status is displayed in the HMC with a
specific LED code.
Linux boot images are signed by distributions like Red Hat and SUSE so that PFW can
validate them by using PKCS7 (Cryptographic Message Syntax (CMS)). PKCS7 is one of the
PKCS family of standards that was created by RSA Laboratories, and it is a standard syntax
for storing signed or encrypted data. PowerVM includes the public keys that are used by PFW
to validate the GRUB boot loader.
The PFW verifies the appended signature on the GRUB image before handing control to
GRUB. Similarly, GRUB verifies the appended signature on the kernel image before starting
the OS to ensure that every image that runs at boot time is verified and trusted.
Limitations
Here are the limitations for Secure Boot with Linux at the time of writing:
Key rotations for the GRUB or kernel require a complete firmware update.
Administrators cannot take control of the LPAR and manage their keys.
User-signed custom builds for kernel or GRUB do not start by using static key
management.
Secure Boot enables lockdown in the kernel to restrict direct or indirect access to the
running kernel, which protects against unauthorized modifications to the kernel or access
to sensitive kernel data.
Lockdown impacts some of the IBM Power platform functions that are accessible by using
the user space RTAS interface.
Table 2-2 shows the supported combinations of firmware and Linux distribution.
For more information about Secure Boot with Linux, see Guest Secure Boot with static keys in
IBM Documentation.
For AIX, the Trusted Boot function is handled by the TE functions, as described in 4.9,
“Trusted Execution” on page 115.
Hypervisor vulnerabilities
The first step in securing hypervisors is understanding the unique vulnerabilities that they
face. The following sections describe some of these vulnerabilities and how to avoid them in
your IBM Power environment.
Hyperjacking
Hyperjacking is a type of advanced cyberattack where threat actors take control of the
hypervisor, which handles the virtualized environment within a main computer system. The
actors' ultimate aim is to deceive the hypervisor into running unauthorized tasks without
leaving traces elsewhere on the computer.
IBM Power and PowerVM provide many options to help prevent a VM escape type attack.
PowerVM has excellent LPAR isolation that prevents an LPAR from seeing resources outside
of its defined VM.
Resource exhaustion
Attackers can target the resource allocation features of a hypervisor, which can lead to a
denial of service (DoS) by exhausting resources such as CPU and memory, which affects all
VMs that are hosted on the hypervisor.
Within PowerVM, when a resource is defined to an LPAR, there are limits that are enforced by
the hypervisor to protect the resources from overallocation. Defining the minimum, maximum,
and requested values for memory and CPU correctly helps ensure that you avoid resource
exhaustion attacks.
Ensure that the administrator credentials that are used for configuring PowerVM are
protected, and use role-based access control (RBAC) to limit the scope of changes that can
be made.
For more information about securing your PowerVM environment, see 3.1, “Hardware
Management Console security” on page 52 and 3.3, “VIOS security” on page 72.
LPAR isolation is a basic tenet of IBM PowerVM, which is the IBM Power hypervisor.
PowerVM can share resources from a single machine across all LPARs that are defined on
that machine. Also, PowerVM has extra capabilities that allow LPARs to be non-disruptively
moved from one host machine to another one to provide load balancing and high availability
(HA) configurations. LPAR restart technologies support disaster recovery (DR) options to
restart workloads at another site if there is a site failure.
The task of managing the virtualization layer in the IBM Power ecosystem is divided into two
distinct areas: hardware management, and I/O virtualization. The hardware management
aspect is handled by the Hardware Management Console (HMC), which is an appliance that
defines the logical partitions (LPARs) in each server, dividing and sharing the installed
resources across the various virtual machines (VMs) that are supported. An HMC can
manage multiple servers, but as your infrastructure grows across multiple locations and many
servers, the Cloud Management Console (CMC) provides a single tool for consolidating
information across several HMCs.
The Virtual I/O Server (VIOS) is a special partition that runs in an IBM Power server that
shares physical devices across multiple LPARs. The purpose of the VIOS is to virtualize the
physical adapters in the system to reduce the number of adapters. Systems with virtualized
I/O can move to other servers as needed for load-balancing and high availability (HA) during
planned or unplanned outages, which provides a more available and resilient environment.
HMC packaging
Initially, the HMC was delivered solely as a traditional hardware appliance, with the software
and hardware bundled together and installed onsite. As client environments grew, there was a
demand to virtualize the HMC function to minimize infrastructure needs. In response, IBM
introduced the virtual Hardware Management Console (vHMC), where you can use your own
hardware and server virtualization to host the IBM-provided HMC virtual appliance. The
vHMC image is available for both x86 and IBM Power servers and supports the following
hypervisors:
For x86 virtualization:
– Kernel-based Virtual Machine (KVM) on Ubuntu 18.04 LTS or Red Hat Enterprise
Linux (RHEL) 8.0 or 9.0
– Xen on SUSE Linux Enterprise Server 12
– VMware ESXi 6.5, 7.0, or 7.0.2
For Power virtualization: PowerVM
The distribution of HMC Service Packs (SPs) and fixes is consistent for both hardware and
vHMCs. However, for vHMC on PowerVM, Power firmware updates are managed by IBM. For
vHMC on x86 systems, if security vulnerabilities arise, consult with the hypervisor and x86
system vendors for any necessary updates to the hypervisor and firmware. The steps for
enabling Secure Boot differ between hardware and vHMCs due to architectural differences.
For more information and detailed instructions about enabling the Secure Boot function, see
3.1.9, “Secure Boot” on page 61.
For more information about the vHMC, see Virtual HMC appliance (vHMC) Overview.
HMC functions
With the HMC, you can create and manage LPARs, which include the ability to dynamically
add or remove resources from active partitions. The HMC also handles advanced
virtualization functions such as Capacity Upgrade on Demand and Power Enterprise Pools.
Also, the HMC provides terminal emulation for the LPARs on your managed systems. You can
connect directly to these partitions from the HMC or configure it for remote access. This
terminal emulation feature helps ensure a reliable connection, which is useful if other terminal
devices are unavailable or not operational. It is particularly valuable during the initial system
setup before you configure your preferred terminal.
By using its service applications, the HMC communicates with managed systems to detect,
consolidate, and relay information to service and support teams for analysis. For a visual
representation of how the HMC integrates into the management and serviceability of
IBM Power servers, see Figure 3-1 on page 53.
One HMC can oversee multiple servers, and multiple HMCs can connect to a single server. If
a single HMC fails or loses connection to the server firmware, the server continues to operate
normally, but changes to the LPAR configuration are not possible. To mitigate this situation,
you can connect an extra HMC as a backup to help ensure a redundant path between the
server and service and support.
Each HMC comes preinstalled with the HMC Licensed Machine Code to help ensure
consistent function. You have two options for configuring HMCs to provide flexibility and
availability:
Local HMC
A local HMC is situated physically close to the system that it manages and connects by
using a private or public network. In a private network setup, the HMC acts as a DHCP
server for the system’s service processors. Alternatively, it can manage the system over
an open network, where the service processor’s IP address is manually assigned through
the Advanced System Management Interface (ASMI).
Remote HMC
A remote HMC is located away from its managed systems, which might be in a different
room, building, or even a separate site. Typically, a remote HMC connects to its managed
servers over a public network, although it can also be configured to connect through a
private network.
IBM created a document that explains the connectivity that is used by the HMC and how to
make it secure. The HMC 1060 Connectivity Security white paper is a good starting point for
enabling a secure HMC environment in your enterprise.
Level 2
Level 2 defines some actions that you should consider when you have multiple HMC users
that are defined in the environment. If you have multiple users that use the HMC, consider the
following items:
HMC supports fine-grained control of resources and roles. Create an account for each
user on the HMC.
Assign only the necessary roles to users.
Assign only necessary resources (systems, partitions, and others) to users.
Both resources and roles that are assigned to the users must follow the least privilege
principle. Create custom roles if necessary.
Enable user data replication between HMCs with different modes.
Import a certificate that is signed by a certificate authority (CA).
Enable Secure Boot.
Enable multi-factor authentication (MFA).
Enable a PowerSC profile.
Level 3
Level 3 defines extra considerations when you have multiple HMCs in the environment. If you
have many HMCs and sysadmins, consider the following items:
Use centralized authentication by using Lightweight Directory Access Protocol (LDAP) or
Kerberos (HMC does not support the single sign-on (SSO) feature).
Enable user data replication between HMCs.
Put HMC in National Institute of Standards and Technology (NIST) SP 800-131A mode so
that it uses only strong ciphers.
Block unnecessary ports in a firewall.
All other ports should be kept within a private or isolated network for security purposes.
To set the NIST SP 800-131A mode, run the following command on the HMC:
chhmc -c security -s modify --mode nist_sp800_131a
If you want to return the HMC to legacy mode, run the following command:
chhmc -c security -s modify --mode legacy
3.1.5 Encryption
All communication channels that are used by the HMC are encrypted. By default, the HMC
employs TLS and HTTPS with secure cipher sets that are bundled with the HMC. The default
ciphers provide strong encryption and are used for secure communication on ports 443,
17443, 2301, and 5250 proxy, and for internal HMC communication.
Note: For more information about the encryption ciphers that are used by the HMC, run the
lshmcencr command in the HMC CLI. If your organization's corporate standards require
different ciphers, use the chhmcencr command to modify them. For more information, see
Managing the HTTPS ciphers of the HMC web interface by using the HMC.
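For example, the following commands list the HTTPS ciphers that are currently in use and the ciphers that are available. This sketch assumes the webui cipher set; verify the flags with the command help on your HMC release:
List the ciphers that are currently used by the HMC web interface:
lshmcencr -c webui -t c
List the ciphers that are available to be used:
lshmcencr -c webui -t a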
When a user seeks remote access to the HMC user interface through a web browser, they
initiate a request for the secure page by using https://<hmc_hostname>. Then, the HMC
presents its certificate to the remote client (web browser) during the connection process. The
browser verifies the certificate by checking that it was issued by a trusted authority, that it is
still valid, and that it was specifically issued for that HMC.
HMC task roles are either predefined or customized. When you create an HMC user, you
must assign a task role to that user. Each task role grants the user varying levels of access to
tasks that are available on the HMC interface. You can assign managed systems and LPARs
to individual HMC users so that you can create a user that has access to managed system A
but not to managed system B. Each grouping of managed resource access is called a
managed resource role.
Table 3-1 lists the predefined HMC task roles, which are the defaults on the HMC.
hmcviewer A viewer can view HMC information, but cannot change any
configuration information.
You can create customized HMC task roles by modifying predefined HMC task roles. Creating
customized HMC task roles is useful for restricting or granting specific task privileges to a
certain user.
User Properties
User Properties has the following properties that you can set:
Timeout Values
These values specify values for various timeout situations:
– Session timeout minutes
Specifies the number of minutes during a logon session after which a user is prompted
for identity verification. If the password is not reentered within the amount of time that
is specified in the Verify timeout minutes field, the session is disconnected. A 0 is
the default and indicates no expiration. You can specify up to a maximum value
of 525600 minutes (equivalent to 1 year).
– Verify timeout minutes
Specifies the amount of time that is required for the user to reenter a password when
prompted, if a value was specified in the Session timeout minutes field. If the password
is not reentered within the specified time, the session is disconnected. A 0 indicates
that there is no expiration. The default is 15 minutes. You can specify up to a maximum
value of 525600 minutes (equivalent to 1 year).
Password policy
The HMC has default password policies that you can use to meet general corporate
requirements. To meet specific requirements, users can create a custom password policy and
apply it by using the HMC. Password policies are enforced for locally authenticated HMC
users only.
To see what password policies are defined on the HMC, use the lspwdpolicy command as
follows:
List all HMC password policies:
lspwdpolicy -t p
List only the names of all HMC password policies:
lspwdpolicy -t p -F name
List the HMC password policy status information:
lspwdpolicy -t s
Source: https://www.ibm.com/docs/en/power10/7063-CR1?topic=commands-lspwdpolicy
If you deactivate a password policy, activate another policy to protect your system.
An additional defined policy, “HMC Standard Security Password Policy”, is also available and
might be acceptable for use depending on your corporate requirements. Its settings are
defined as follows:
min_lowercase_chars=1
min_uppercase_chars=1
min_digits=1
min_special_chars=1
pwage=90
min_length=15
If you want to create your own policy, use the mkpwdpolicy command. Example 3-1 shows an
example of creating a password policy.
The -i flag takes the policy parameters as CLI input. The -f flag reads the parameters from a
file, which simplifies the entry of the command and provides consistency across your HMCs.
When the policy is defined, it still must be activated before it is effective.
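The following sketch creates and activates a policy by using the -i form. The policy name and attribute values are illustrative only, and the activation option should be verified against the chpwdpolicy command reference for your HMC release:
Create a policy that requires 15-character passwords that expire after 90 days:
mkpwdpolicy -i "name=myPolicy,min_length=15,pwage=90,min_digits=1"
Activate the policy:
chpwdpolicy -o a -n myPolicy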
Deleting password policies is done by using the rmpwdpolicy command. The -n parameter
specifies the name to delete. For example, rmpwdpolicy -n xyzPolicy deletes the policy
“xyzPolicy”.
Source: https://www.ibm.com/docs/en/power10/7063-CR1?topic=commands-chpwdpolicy
Source: https://www.ibm.com/docs/en/power10/7063-CR1?topic=commands-mkpwdpolicy
You can use the following attributes to configure these session limits:
Maximum WebUI sessions per user: Specify the maximum number of web user interface
sessions that are allowed for a logged-in user. By default, 100 web user interface sessions
are allowed for a user. The value for maximum WebUI sessions is 50 - 200.
To set the session limit per user, use the following command:
chhmcusr -t default -i "max_webui_sessions_per_user=50"
Console maximum WebUI session: Specifies the maximum number of web user interface
sessions that are allowed in the HMC. By default, 1000 web user interface sessions are
allowed in the HMC. At the time of writing, this parameter is read-only and cannot be
modified.
Most tasks that are performed on the HMC (either locally or remotely) are logged by the HMC.
These entries can be viewed by using the Console Events Log task, under Serviceability →
Console Events Log, or by using the lssvcevents command from the restricted shell.
A log entry contains the timestamp, the username, and the task that is being performed.
When a user logs in to the HMC locally or from a remote client, entries are also recorded. For
remote login, the client hostname or IP address is also captured, as shown in Example 3-2.
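For example, the following command, which is run from the restricted shell, lists the console events from the last 7 days (the number of days is illustrative):
lssvcevents -t console -d 7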
A user with the hmcsuperadmin role can use the scp command to securely copy the file to
another system. If you want to copy syslogd entries to a remote system, you may use the
chhmc command to change the /etc/syslog.conf file on the HMC to specify a system to
which to copy. For example, the following command causes the syslog entries to be sent to
the myremotesys.company.com hostname:
chhmc -c syslog -s add -h myremotesys.company.com
The systems administrator must ensure that the syslogd daemon that is running on the
target system is set up to receive messages from the network. On most Linux systems, this
task can be done by adding the -r option to SYSLOGD_OPTIONS in the /etc/sysconfig/syslog
file.
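As a sketch for a legacy sysklogd-based target system, the entry in /etc/sysconfig/syslog might look like the following example. Newer rsyslog-based distributions configure remote reception differently (through the imudp or imtcp modules), so check the documentation for your distribution:
SYSLOGD_OPTIONS="-r"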
Due to the difference in architecture, different steps are required to enable Secure Boot on
physical and vHMCs:
For the documentation for the Secure Boot feature enablement steps for hardware HMC,
see Enabling secure boot on 7063-CR2 HMC.
For the dedicated steps to enable Secure Boot for vHMC based on VMware ESXi, see
Installing the HMC virtual appliance enabled with secure boot by using VMware ESXi.
For the steps to enable Secure Boot for vHMC for KVM Hypervisor on RHEL, see
Installing the HMC virtual appliance enabled with secure boot by using KVM hypervisor on
RHEL.
For the steps to enable Secure Boot for vHMC for KVM Hypervisor on Ubuntu, see
Installing the HMC virtual appliance enabled with secure boot by using KVM hypervisor on
Ubuntu.
For more information about enabling the PowerSC profile in the HMC, see Enabling PowerSC
profile for the HMC and HMC hardening profile.
For comprehensive documentation about configuring the HMC to send call home data, testing
problem reporting, managing user authorization, and handling information transmission, see
Configuring the local console to report errors to service and support.
For documentation to enable call home in a 7063-CR2 HMC, see How to configure a new
7063-CR2 HMC.
To enable communication from the HMC to the BMC of the inband connection, configure
credentials to allow the HMC to connect to the BMC for periodic monitoring of hardware
problem events and other management console functions.
Note: The default password auto-expires on first access by the user and must be
changed.
It is a best practice that a local user, other than root, is configured with administrator
privilege to be used for console inband communications. For more information, see How to
add a user to the BMC on the 7063-CR2 HMC.
Note: The Change Expired Password task cannot be selected by the user. It is only
available when the previously provided password has expired. This scenario can be
common for first-time setups where the user has yet to configure the BMC and the
default credentials of root/OpenBMC are still in place.
4. If the current credentials are valid, click Close, and then click Close again to end the task.
If the credentials failed or are not set, update the credentials by providing a valid
username and password and clicking Set Credentials. If the credentials are accepted,
click Close to exit.
5. If the credentials are expired, clicking Close switches to the Change Expired Password
task. Provide a new password (twice to confirm) to update the new password for the user
on the BMC. Click Change Expired Password.
Hosted in IBM Cloud, the CMC helps ensure secure, anytime access to enable system
administrators to generate reports and gain insights into their Power cloud deployments. The
CMC is required for IBM Power Enterprise Pools 2 (PEP2), which is a cloud solution that
facilitates dynamic resource allocation and pay-as-you-go capacity management within the
IBM Power environment.
CMC is not a single product, but a platform through which IBM delivers applications and
microservices in a DevOps model. The solution is for mobile devices, tablets, and desktop
browsers, which help ensure convenient access for cloud operators.
Cloud Connector is the service that runs on the IBM Power HMC and sends system resource
usage data to the CMC service. The Cloud Connector and the CMC provide the applications
that are shown in Table 3-2 for the IBM Power ecosystem.
Log Trends System log aggregation across the Power enterprise, which
provides a central point to view log trends from Live Partition
Mobility, remote restart, and the lifecycle of LPARs.
Patch Planning Know all your patch planning needs at a glance, including
firmware, VIOS, OS, and HMC. Identify all patch dependencies.
Integrated, collaborative planning with stakeholders.
User management is handled by the administrator of the organization that is registered for
IBM CMC for Power services. Administrators can manage users from the Settings page. To
access this page, click the navigation menu icon in the CMC portal header, and then select
the Settings icon. On the Settings page, click the Manage Users tab to view all users that
are configured for your organization. Users without administrator privileges have limited
access to specific applications.
To be added to the CMC, users must have a valid IBMid. In addition to the IBMid, users must
be added to the CMC application by the administrator within your company. The resource role
assignment feature enables administrators to assign appropriate tasks to users.
Resource roles can be managed from the CMC Portal Settings page. On this page, click the
Manage Resource Roles tab to view and manage resource roles. Administrators can add,
modify, or delete resource roles for other users from this page.
Attention: Data filtering for the allowlist is supported only with HMC 1020 or later. If your
version does not meet this requirement, data from systems that are not on the allowlist will
still be uploaded to the CMC.
No List: Selecting No List disables both filtering types and shows data from all managed
systems.
To view the current managed systems on the blocklist or allowlist, click the Blocklist and
Allowlist tabs in the Managed System Filter area.
Important: Only one filter type (Blocklist, Allowlist, or No List) can be active at a time.
Adding a system to the blocklist does not automatically remove its existing data from the
cloud. To purge this data, ensure that Cloud Connector is running, and then run chsvc -s
cloudconn -o stop --purge from the management console CLI.
No List Option
To disable the blocklist and allowlist filters and display data from all managed systems, click
No List.
Data Filter
To keep data in Cloud Connector from getting pushed to cloud storage, add the systems to
this table. Selections are available to filter the System IP Address and Logical Partition/Virtual
IO Server IP Address. These systems can be reenabled later if needed.
To enable Attribute Masking, from the Cloud Console interface, select Settings → Cloud
Connector → Cloud Connector Management, scroll down to the end of the page, and then
set Attribute Masking to On.
The Attribute Masking feature is available with HMC 1040 and later only. Data from earlier
HMC versions is not masked and remains unmasked even when attribute masking is enabled.
For more information about what fields are masked, see Attribute Masking.
Cloud Connector connections from the HMC to the CMC Cloud Portal
Cloud Connector provides connections between the HMC in your data center and the CMC
instance. It is a component that uploads data to the CMC cloud. Cloud Connector is
preinstalled on the HMC, but is not started by default. The Cloud Connector can be started by
using a key. To use the key, select CMC Portal → Settings → Cloud Connector Settings.
Important: Starting with HMC 9.1.941.0, the Cloud Connector supports an HTTP proxy. If
you are using earlier versions of the HMC, the Cloud Connector requires a SOCKS5 proxy.
Cloud Connector uses a one-way push model where it initiates all outbound communication.
For an automatic network-based configuration, where Cloud Connector pulls the configuration
file from the Cloud Database, use HTTPS. For an application data flow (push) between Cloud
Connector and the CMC data ingestion node, use TCP with SSL. All communication from the
Cloud Connector to the CMC is secured by using the Transport Layer Security 1.2 (TLSv1.2)
protocol.
Use the startup key for the HMC based Cloud Connector to establish a valid connection
between the connector and the CMC Cloud Portal Server (cloud portal). This key is also used
for a connection between the Cloud Connector and the configuration database. Once a valid
connection is established to the cloud portal, credentials are returned to the Cloud Connector,
which enables dynamic configuration and reconfiguration.
Figure 3-2 shows the Cloud Connector establishing trust with the cloud portal by pushing the
user-provided key to a cloud portal key verification endpoint.
Figure 3-2 Cloud Connector pushes the key to the cloud portal
Once the key verification is successful, the CMC cloud portal server returns credentials for
pulling the Cloud Connector configuration file and SSL certificates, as shown in Figure 3-3 on
page 69.
Source: https://ibmcmc.zendesk.com/hc/en-us/article_attachments/360083545614/CloudConnectorSecurityWhitePaper.pdf
A security test runs to assert that the startup key provided is valid. The test begins with a GET
request from the connector to the cloud portal, which returns a cross-site request forgery
(XSRF) header. With this XSRF header, along with a portion of the decoded key, a POST
operation is performed to the same cloud portal endpoint. If the key is considered valid, the
cloud portal responds with a set of encoded credentials that grant Cloud Connector access to
a database containing the customer’s Cloud Connector configuration file.
Cloud Connector connections from the HMC to the CMC Cloud Database
Once Cloud Connector establishes a successful connection with the cloud portal, a secure
SSL connection is established between Cloud Connector and the Cloud Database to fetch the
configuration file. This configuration file contains the following items:
Cloud applications that are enabled by the customer.
Data to push for those applications.
Data to filter (block-listed managed systems, and selected system and partition IP
addresses).
IP address of the cloud data ingestion node.
Also, this file provides credentials for fetching the SSL certificate and key pair that are used in
communication between the Cloud Connector and the cloud data ingestion node. The
credentials are used to access a separate database from the one that is used to fetch the
configuration file. However, the underlying network location and mechanism that are used to
fetch the certificates are the same.
Figure 3-4 Cloud Connector pulls a configuration file from the CMC database
Once the credentials from the configuration file are collected, as shown in Figure 3-3 on
page 69, the Cloud Connector pulls the SSL certificates and key from the CMC Cloud
Certificate Database, as shown in Figure 3-5.
Figure 3-5 Cloud Connector pulls SSL keys from the Cloud Database
Cloud Connector connections from the HMC to the CMC cloud data
ingestion node
Once the Cloud Connector is configured by the automated configuration process, it begins
collecting data and pushing that data to the data ingestion node. This
channel is secured by using SSL with mutual authentication that uses the certificate and key
that were noted in Figure 3-5. Using mutual authentication helps ensure that the connector
sends data to only trusted data ingestion nodes. The certificate and key are stored on the
HMC file system, but are accessible only by the root user.
Figure 3-6 on page 71 shows the connection between the HMC and the ingestion node
through the SOCKS5 proxy by using the certificate that was obtained in Figure 3-5. Starting
with HMC 9.1.941.0, Cloud Connector can also be started with only the HTTP proxy option.
If the Cloud Connector starts with only the HTTP proxy, then it uses the HTTP proxy to
establish a connection between the HMC and the ingestion node, as shown in Figure 3-7. At
the time of writing, the proxy options that are shown in Figure 3-6 are still supported in the
HMCs, when the Cloud Connector starts with both HTTP and SOCKS5 proxies.
Figure 3-7 The Cloud Connector authentication to the CMC data ingestion node in the new HTTP
proxy mode
IBM Power Enterprise Pools 2.0 provides enhanced multisystem resource sharing and
by-the-minute consumption of on-premises compute resources to clients who deploy and
manage a private cloud infrastructure. Power Enterprise Pool 2.0 is monitored and managed
by the IBM CMC. The CMC Enterprise Pools 2.0 application helps you to monitor base and
metered capacity across a Power Enterprise Pool 2.0 environment, with both summary views
and sophisticated drill-down views of real-time and historical resource consumption by
LPARs.
Because all virtualized I/O traffic goes through VIOS, securing it is crucial. If an attacker
compromises VIOS, they might gain access to all virtualized network and storage traffic on
the system and potentially infiltrate client LPARs.
After you deploy VIOS, your first priority is to configure it securely. Many of the security
settings that are applicable to AIX can also be applied to VIOS. Because of VIOS' appliance
nature, if you are unsure about applying specific security configurations, contact IBM Support
for help.
Although VIOS does not have its own published security benchmarks, you can refer to the
Center for Internet Security (CIS) AIX benchmark to guide your VIOS security configuration.
VIOS 3.1 is based on AIX 7.2, and VIOS 4.1 is based on AIX 7.3.
The best way to get information about new VIOS releases, SPs, and security fixes is to
subscribe to VIOS notifications at the IBM Support portal.
Occasionally, VIOS receives security fixes to address newly identified vulnerabilities. Apply
these fixes in accordance with your company's security and compliance policies. Many VIOS
administrators delay installing security updates, and opt to wait for the next SP instead. This
approach might be acceptable if you evaluate the risks that are associated with a
compromised virtualization infrastructure and your organization is prepared to accept those
risks.
Decide which VIOS to update first based on the needs of your client LPARs. To prevent log
messages during the update, you can disable the storage paths to the VIOS that is being
updated.
To help you set up system security when you initially install the VIOS, the VIOS provides the
configuration assistance menu. You can access the configuration assistance menu by running
the cfgassist command. By using the viosecure command, you can set, change, and view
the security settings. By default, no VIOS security levels are set. Run the viosecure
command to change the settings.
The system security hardening feature protects all elements of a system by tightening
security or implementing a higher level of security. Although hundreds of security
configurations are possible with the VIOS security settings, you can implement security
controls by specifying a high, medium, or low security level.
By using the system security hardening features that are provided by VIOS, you can specify
the following values:
Password policy settings
Actions such as usrck, pwdck, grpck, and sysck
Default file-creation settings
Settings that are included in the crontab command
Configuring a system at too high a security level might deny services that are needed. For
example, telnet and rlogin are disabled for high-level security because the login password
is sent over the network unencrypted. If a system is configured at too low a security level, the
system might be vulnerable to security threats. Because each enterprise has its own unique
set of security requirements, the predefined high, medium, and low security configuration
settings are best suited as a starting point for security configuration rather than an exact
match for the security requirements of a particular enterprise. As you become more familiar
with the security settings, you can make adjustments by choosing the hardening rules that
you want to apply. For more information about the hardening rules, run the man command.
To implement system security hardening rules, you can use the viosecure command to
specify a security level of high, medium, or low. A default set of rules is defined for each level.
You can also set a level of default, which returns the system to the system standard settings
and removes any level settings that are applied.
The low-level security settings are a subset of the medium-level security settings, which are a
subset of the high-level security settings. Therefore, high level is the most restrictive setting
and provides the greatest level of control. You can apply all rules for a specified level or select
which rules to activate for your environment. By default, no VIOS security levels are set, so
you must run the viosecure command to modify the settings.
Note: To exit the command without making any changes, type “q”.
To remove the security settings that are applied, run the command viosecure -undo.
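For example, the following sketch applies and verifies a security level as the padmin user. Verify the options against the viosecure documentation for your VIOS level:
Apply all high-level hardening rules:
viosecure -level high -apply
View the security settings that are currently applied, in non-interactive mode:
viosecure -view -nonint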
The VIOS firewall is not enabled by default. To enable the VIOS firewall, run the viosecure
command with the -firewall option. When you enable the firewall, the default setting is
activated, which enables access for the following IP services:
ftp
ftp-data
ssh
web
https
rmc
cimom
Note: The firewall settings are contained in the viosecure.ctl file in the
/home/ios/security directory. If for some reason the viosecure.ctl file does not exist
when you run the command to enable the firewall, you receive an error. You can use the
-force option to enable the standard firewall default ports.
You can use the default settings or configure the firewall settings to meet the needs of your
environment by specifying which ports or port services to allow. You can also turn off the
firewall to deactivate the settings.
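For example, the following sketch enables the firewall and opens one extra port. The port number 9080 is only an illustration of an application-specific port:
Enable the firewall with the default allowed services:
viosecure -firewall on
Allow an application-specific port:
viosecure -firewall allow -port 9080
View the current firewall settings:
viosecure -firewall view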
Tip: For more information about securing your VIOS, see IBM Documentation.
The account “admin” has all the privileges to change a VIOS configuration but cannot switch
into oem_setup_env mode and run root commands.
The account “monitor” can log in to VIOS and see the current configuration of the VIOS, but
cannot change the configuration.
The super-administrators with access to oem_setup_env mode have the role PAdmin. All other
administrators have the role Admin. View-only users have the role ViewOnly.
If you want to assign admin privileges to a user, run the following command:
chuser -attr roles=Admin default_roles=Admin pgrp=system admin
expires The expiration date of the account in the format MMDDhhmmyy. If the
value is set to 0, the account does not expire.
histexpire The period in weeks when the user cannot reuse an old password.
histsize The number of previous passwords that the user cannot reuse.
maxexpired The maximum number of weeks after expiration during which the
user can change their expired password.
maxrepeats The maximum number of times that a character can be repeated in the
password.
minage The minimum number of weeks before the user can change the
password again after a new password is set.
mindiff The minimum number of characters in the new password that should
differ from the old password.
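For example, the following sketch tightens the password history and reuse rules for a hypothetical VIOS account that is named padmin2 (the attribute values are illustrative only):
chuser -attr histexpire=26 histsize=8 maxrepeats=2 mindiff=4 padmin2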
The option -rmdir removes the user's home directory. The files on VIOS that are owned by the
removed user do not change their ownership automatically.
The local date, time, user account, and the issued command are saved in the file. An example
is shown in Example 3-6.
This chapter describes some of the basics of locking down your AIX logical partitions
(LPARs). Some security hardening is implemented in AIX 7.3 by default. Usually, you must
check and implement some additional security settings according to your environmental
requirements, which include setting default permissions and umasks, using good usernames
and passwords, hardening the security with AIX Security Expert, protecting the data at rest
directly on the disk or at the logical volume (LV) layer through encryption, removing insecure
daemons, and integrating with Lightweight Directory Access Protocol (LDAP) directory
services or Microsoft Active Directory (AD).
Although this checklist is not exhaustive, it provides a solid foundation for developing a
comprehensive security plan that is tailored to your environment. This section covers best
practices and introduces other considerations in the following sections.
Disable the root account's ability to remotely log in. The root account should be able to log
in only from the system console (see the command sketch after this list).
Enable system auditing. For more information, see Auditing overview.
Enable a login control policy. For more information, see Login control.
Disable user permissions for running the xhost command. For more information, see
Managing X11 and CDE concerns.
Prevent unauthorized changes to the PATH environment variable. For more information,
see PATH environment variable.
Disable telnet, rlogin, and rsh. For more information, see TCP/IP security.
Establish user account controls. For more information, see User account control.
Enforce a strict password policy. For more information, see Passwords.
Establish disk quotas for user accounts. For more information, see Recovering from
over-quota conditions.
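As a sketch, the following AIX commands implement two items from this checklist: restricting root logins to the console and tightening the default password policy. The attribute values are examples only; align them with your own standards:
Restrict the root account so that it cannot log in remotely:
chuser rlogin=false root
Require a minimum password length of 12 characters and a maximum password age of 13 weeks for the default user stanza:
chsec -f /etc/security/user -s default -a minlen=12 -a maxage=13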
Later, when the process needs to open an EFS-protected file, these credentials are checked.
If a key matching the file protection is found, the process can decrypt the file key and the file
content. Group-based key management is also supported.
Note: EFS is part of an overall security strategy. It works with sound computer security
practices and controls.
EFS is part of the base AIX operating system. To enable EFS, root or any user with the
role-based access control (RBAC) aix.security.efs authorization must use the efsenable
command to activate EFS and create the EFS environment. For more information about who
can manage EFS, see 4.2.3, “Root access to user keys” on page 82. This action is a one-time
system enablement.
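For example, a minimal sketch of the one-time enablement that is run as root (the command prompts for a password to protect the initial keystore):
efsenable -a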
After the EFS is enabled, when a user logs in, their key and keystore are silently created and
secured or encrypted with the user’s login password. Then, the user's keys are used
automatically by the Enhanced Journaled File System (JFS2) for encrypting or decrypting
EFS files. Each EFS file is protected with a unique file key, which is secured or encrypted with
the file owner’s or group’s key, depending on the file permissions. By default, a JFS2 File
System is not EFS-enabled.
When a file system is EFS-enabled, the JFS2 File System transparently handles encryption
and decryption in the kernel for read/write requests. User and group administration
commands (such as mkgroup, chuser, and chgroup) manage the keystores for the users and
groups seamlessly.
Users can change their login password without affecting open keystores and the keystore
password can be different from the login password. When the user password differs from the
keystore password, you must manually load the keystore by using the efskeymgr command.
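For example, when the keystore password differs from the login password, the following sketch opens the keystore into a new shell and then lists the keys that are loaded. Verify the flags with the efskeymgr documentation for your AIX level:
Open the keystore and start a shell with the keys loaded:
efskeymgr -o ksh
List the keys that are associated with the current shell:
efskeymgr -V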
Keystore details
The keystore has the following characteristics:
Protected with passwords and stored in the Public Key Cryptography Standards (PKCS)
#12 format.
Location:
– User: /var/efs/users/<username>/keystore
– Group: /var/efs/groups/<groupname>/keystore
– efs_admin: /var/efs/efs_admin/keystore
Users can choose the encryption algorithms and key lengths.
Access is inherited by child processes.
Note: The EFS keystore is opened automatically as part of the standard AIX login only
when the user’s keystore password matches their login password. This approach is set up
by default during the initial creation of the user’s keystore. Login methods other than the
standard AIX login, such as loadable authentication modules and Pluggable Authentication
Modules (PAMs), might not automatically open the keystore.
All cryptographic functions come from the CLiC kernel services and CLiC user libraries.
By default, a JFS2 file system is not EFS-enabled. A JFS2 file system must be EFS-enabled
before EFS inheritance can be activated or any EFS encryption of user data can take place. A
file is created as an encrypted file either explicitly with the efsmgr command or implicitly
through EFS inheritance. EFS inheritance can be activated either at the file system level, at
the directory level, or both.
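For example, the following sketch assumes an EFS-enabled file system that is mounted at /foo (the path and file names are illustrative; verify the flags with the efsmgr documentation):
Activate EFS inheritance on a directory so that newly created files in it are encrypted:
efsmgr -E /foo/secure
Encrypt an existing file explicitly:
efsmgr -e /foo/plan.txt
Display the encryption information for a file:
efsmgr -l /foo/plan.txt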
The backup, restore, and tar commands and other related commands can back up and
restore encrypted data, including the EFS metadata that is used for encryption and
decryption.
When backing up EFS encrypted files, you can use the -Z option with the backup command to
back up the encrypted form of the file, along with the file's cryptographic metadata. Both the
file data and metadata are protected with strong encryption. This approach has the security
advantage of protecting the backed-up file through strong encryption. Back up the keystore of
the file owner and group that are associated with the file that is being backed up. These
keystores are in the following files:
User keystores /var/efs/users/<user_login>/*
Group keystores /var/efs/groups/<groupname>/keystore
The efs_admin keystore /var/efs/efs_admin/keystore
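For example, the following sketch backs up an EFS-protected directory by file name in encrypted form, together with its cryptographic metadata, and then restores it (the device and path names are illustrative):
Back up the encrypted files and their EFS metadata:
find /foo/secure -print | backup -ivqZf /dev/rmt0
Restore the files, including the crypto metadata:
restore -xvqf /dev/rmt0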
To restore an EFS backup that was made with the backup -Z command, use the restore
command. The restore command helps ensure that the crypto metadata is also restored.
During the restore process, it is not necessary to restore the backed-up keystores if the user
has not changed the keys in their individual keystore. When a user changes their password to
open their keystore, their keystore internal key is not changed. Use the efskeymgr command
to change the keystore internal keys.
If the user’s internal keystore key remains the same, the user can immediately open and
decrypt the restored file by using their current keystore. However, if the key that is internal to
the user’s keystore changed, the user must open the keystore that was backed up in
association with the backed-up file. This keystore can be opened with the efskeymgr -o
command. The efskeymgr command prompts the user for a password to open the keystore.
This password is the one that was used in association with the keystore at time of the backup.
For example, assume that user Bob’s keystore was protected with the password foo (foo is
not a secure password and is used in this example only for simplicity) and a backup of
Bob’s encrypted files was performed in January along with Bob’s keystore. In this
example, Bob also uses foo for his AIX login password. In February, Bob changed his
password to bar, which also changed his keystore access password to bar. If in March, Bob’s
EFS files were restored, then Bob would be able to open and view these files with his current
keystore and password because he did not change the internal key of the keystore.
However, if it was necessary to change the internal key of Bob’s keystore (with the efskeymgr
command), then by default the old keystore internal key is deprecated and left in Bob's
keystore. When the user accesses the file, EFS automatically recognizes that the restored file
used the old internal key, and EFS uses the deprecated key to decrypt it. During this same
access instance, EFS converts the file to using the new internal key. There is not a significant
performance impact in the process because it is all handled through the keystore and the
file's crypto metadata, and the file data does not need to be re-encrypted.
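When both the data and the keystores are backed up, a session might look like the following
sketch. The paths, device names, and user name are hypothetical; the -Z flag is the
EFS-specific option that is described above:

# Back up Bob's encrypted files together with their EFS metadata
find /home/bob -print | backup -ivqf /dev/rmt0 -Z
# Back up the associated user, group, and efs_admin keystores
tar -cvf /dev/rmt1 /var/efs/users/bob /var/efs/groups /var/efs/efs_admin/keystore
# Restore the files later; the restore command preserves the crypto metadata
restore -xvqf /dev/rmt0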
How do you securely maintain and archive old passwords? There are methods and tools to
archive passwords, which involve using a file that contains a list of all old passwords, and then
encrypting this file and protecting it with the current keystore, which is protected by the current
password. However, IT environments and security policies vary from organization to
organization, so consider the specific security needs of your organization when you develop
the security policy and practices that are best suited to your environment.
The extended attribute (EA) content is not transparent to JFS2. Both user credentials and
EFS metadata are required to determine the crypto authority (access control) for an
EFS-activated file.
Note: Be careful in situations where a file or data might be lost, for example, removing the
file's EA.
The scope of the inheritance of a directory is exactly one level. Any newly created child also
inherits the EFS attributes of its parent if its parent directory is EFS-activated. Existing
children maintain their current encrypted or non-encrypted state. The logical inheritance
chain is broken if the parent changes its EFS attributes. These changes do not propagate
down to the existing children of the directory and must be applied to those children
separately.
If a file system exists, you can enable it for encryption by running the chfs command, for
example:
chfs -a efs=yes /foo
From this point forward, when a user or process with an open keystore creates a file on this
file system, the file is encrypted. When a user who is authorized to access the file reads it,
the file is decrypted automatically.
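The following commands sketch a typical EFS workflow. This is a minimal illustration; the
directory and file names are hypothetical, and command options might vary by AIX release
(see the efsenable, efsmgr, and efskeymgr man pages).

# Enable EFS on the system (run once; creates /var/efs)
efsenable -a
# Activate EFS inheritance on a directory so that new files are encrypted
efsmgr -E /foo/secret
# Explicitly encrypt an existing file, then list its encryption information
efsmgr -e /foo/plan.txt
efsmgr -l /foo/plan.txt
# If the keystore password differs from the login password, load the
# keystore manually into a new shell
efskeymgr -o ksh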
LDAP defines a standard method for accessing and updating information in a directory (a
database), either locally or remotely, in a client/server model.
The AIX operating system provides utilities to help you perform the following management
tasks:
- Export local keystore data to an LDAP server.
- Configure the client to use EFS keystore data in LDAP.
- Control access to EFS keystore data.
- Manage LDAP data from a client system.
All EFS keystore database management commands are enabled to use the LDAP keystore
database. If the system-wide search order is not specified in the /etc/nscontrol.conf file,
keystore operations depend on the user and group efs_keystore_access attribute. If you set
efs_keystore_access to ldap, the EFS commands perform keystore operations on the LDAP
keystore.
Any EFS command    When you set the efs_keystore_access attribute to ldap, you do
                   not need to use the special option -L domain with any command
                   to perform keystore operations on LDAP.
efsenable          Includes the -d Basedn option so that you can perform the
                   initial setup on LDAP to accommodate the EFS keystore. The
                   initial setup includes adding base distinguished names (DNs)
                   for the EFS keystore and creating the local directory structure
                   (/var/efs/).
efskstoldif        Generates the EFS keystore data for LDAP from the following
                   databases on the local system:
                   – /var/efs/users/username/keystore
                   – /var/efs/groups/groupname/keystore
                   – /var/efs/efs_admin/keystore
                   – Cookies, if they exist, for all the keystores
All keystore entries must be unique. Each keystore entry directly corresponds to the DN of the
entry that contains the user and group name. The system queries the user IDs (uidNumber),
group IDs (gidNumber), and the DNs. The query succeeds when the user and group names
match the corresponding DNs. Before you create or migrate EFS keystore entries on LDAP,
ensure that the user and group names and IDs on the system are unique.
The following attributes in the /etc/nscontrol.conf file control the keystore search order:
efsusrkeystore     This search order is common for all users. Example: LDAP, files
efsgrpkeystore     This search order is common for all groups. Example: files, LDAP
efsadmkeystore     This search order locates the admin keystore for any target
                   keystore. Example: LDAP, files
Attention: The configuration that is defined in the /etc/nscontrol.conf file overrides any
values that are set for the user and group efs_keystore_access attribute. The same is true
for the user efs_adminks_access attribute.
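A sample /etc/nscontrol.conf fragment might look like the following sketch. The stanza
names match the attributes that are described above; the secorder values shown are
illustrative only.

efsusrkeystore:
        secorder = LDAP,files

efsgrpkeystore:
        secorder = files,LDAP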
After you configure a system as an LDAP client and enable LDAP as a lookup domain for the
EFS keystore data, the /usr/sbin/secldapclntd client daemon retrieves the EFS keystore
data from the LDAP server whenever you perform LDAP keystore operations.
Some organizations must show that data at rest is encrypted. A common example is the
Payment Card Industry Data Security Standard (PCI DSS) requirement to encrypt sensitive
data, such as the link between a cardholder name and a card number.
Using LV encryption is similar to physical disk encryption. Once operational, the application
environment does not know that the data is encrypted. The encryption is only noticeable
when the (disk) storage is mounted somewhere else and the data is unreadable. Outside of
the configured environment, information in the LV cannot be accessed.
For more information about the LV encryption architecture, see AIX 7.2 TL5: Logical Volume
Encryption.
LV encryption is simple to use and transparent to the applications. Once the system starts
and an authorized process or user is active on the system, the data is accessible to
authorized users based on classic access controls such as ACLs.
When enabled (by default, data encryption is not enabled in LVs), each LV is encrypted with a
unique key. Data encryption must be enabled at the volume group level before you can enable
the data encryption option at the LV level. The LV data is encrypted as the data is written to
the physical volume (PV), and decrypted when it is read from the PV.
Enabling LV encryption creates one data encryption key for each LV. The data encryption key
is protected by storing the keys separately in other data storage devices. The following types
of key protection methods are supported:
- Passphrase
- Key file
- Cryptographic key server
- Platform keystore (PKS), which is available in IBM PowerVM firmware starting at firmware
  level FW950
Example 4-2 shows a summary of the command usage. For a detailed man page, see the
hdcryptmgr command.
Display:
showlv : Displays LV encryption status
showvg : Displays VG encryption capability
showpv : Displays PV encryption capability
showmd : Displays encryption metadata that is related to the device
showconv : Displays status of all active and stopped conversions
Authentication control:
authinit : Initializes master key for data encryption
authunlock : Authenticates to unlock the master key of the device
authadd : Adds more authentication methods
authcheck : Checks validity of an authentication method
authdelete : Removes an authentication method
authsetrvgpwd : Adds "initpwd" passphrase method to all rootvg's LVs
PKS management:
pksimport : Imports the PKS keys
pksexport : Exports the PKS keys
pksclean : Removes a PKS key
pksshow : Displays PKS keys status
Conversion:
plain2crypt : Converts an LV to encrypted
crypt2plain : Converts an LV to not encrypted
PV encryption management:
pvenable : Enables the Physical Volume Encryption
pvdisable : Disables the Physical Volume Encryption
pvsavemd : Saves encrypted physical volume metadata to a file
pvrecovmd : Recovers encrypted physical volume metadata from a file
Note: The bos.hdcrypt and bos.kmip_client file sets are not installed automatically when
you run the smit update_all command or during an operating system migration operation.
You must install them separately from your software source, such as a DVD or an ISO image.
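For example, assuming that the installation media is mounted at /mnt (a hypothetical path),
the file sets can be installed with installp:

# Install the LV encryption file sets from the mounted software source
installp -agXY -d /mnt bos.hdcrypt bos.kmip_client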
3. Check the encryption state of varied-on volume groups by running the command that is
shown in Example 4-5.
4. Check the volume group encryption metadata by running the command that is shown in
Example 4-6.
2. Check the details of the new volume group by running the command that is shown in
Example 4-8.
3. Check the authentication state of the LV by running the command that is shown in
Example 4-9.
2. Check the authentication status and authentication methods for the LV by running the
command that is shown in Example 4-11.
3. Vary off and vary on the volume group by running the following commands:
# varyoffvg testvg
# varyonvg testvg
4. Check the authentication status of the LV by running the command that is shown in
Example 4-12.
The output in this example shows that the PKS is not activated. The keystore size of an
LPAR is set to 0 by default.
2. Shut down the LPAR and increase the keystore size in the associated Hardware
Management Console (HMC). The keystore size can be 4 KB - 64 KB. You cannot change
the keystore size while the LPAR is active.
3. Check the LPAR PKS status again by running the command that is shown in
Example 4-16.
4. Add the PKS authentication method to the LV by running the command that is shown in
Example 4-17.
5. Check the encryption status of the LV by running the command that is shown in
Example 4-18.
6. Check the PKS status by running the command that is shown in Example 4-20 on
page 97.
To add the key server authentication method, complete the following steps:
1. Check the key servers in the LPAR by running the command that is shown in
Example 4-22.
2. Add an encryption key server with the name keyserver1 by running the command that is
shown in Example 4-23.
4. Check the encryption key server information that is saved in the ODM KeySvr object class
by running the command that is shown in Example 4-25.
5. Add the key server authentication method to the LV by running the command that is shown
in Example 4-26.
6. Check the encryption status of the LV by running the command that is shown in
Example 4-28.
3. Check the contents of the testfile file by running the command that is shown in
Example 4-30.
4. Check the encryption status of the LV by running the command that is shown in
Example 4-31.
4.3.8 Migrating the PKS to another LPAR before the volume group is migrated
To migrate the PKS to another LPAR, complete the following steps:
1. Export the PKS keys into another file by running the command that is shown in
Example 4-34.
4. Check whether the authentication method is valid and accessible by running the command
that is shown in Example 4-36.
6. Check whether the authentication method is valid and accessible by running the command
that is shown in Example 4-38.
2. Check the details of the volume group by running the command that is shown in
Example 4-40.
1. Enable the LV encryption by running the command that is shown in Example 4-41.
3. Check the encryption status of the LV by running the command that is shown in
Example 4-43 on page 103.
With AIX 7.3 TL1, IBM continues to address clients’ need to protect data by introducing
encrypted PVs. This capability encrypts data at rest on disks, and because the data is
encrypted in the OS, the disk data in transit is encrypted too.
Install the following file sets to encrypt the PV data. These file sets are included in the base
operating system. A quick way to verify their installation state is shown after the list.
- bos.hdcrypt
- bos.kmip_client
- security.acf
- openssl.base
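As a quick check (illustrative only), you can confirm that the file sets are installed with
lslpp:

# Verify that the PV encryption prerequisites are installed
lslpp -L bos.hdcrypt bos.kmip_client security.acf openssl.base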
AIX historically supported encrypted files by using the EFS. More recently, AIX 7.2 TL 5
introduced support for LV encryption, as described in 4.3, “Logical volume encryption” on
page 89.
Now, AIX offers a new level of security with PV encryption. With this feature, you can encrypt
the entire PV, which provides enhanced protection for applications that do not rely on volume
groups or LVs, such as certain database applications. However, it is also possible to create
volume groups and LVs on encrypted disks.
The hdcryptmgr command is used to manage encrypted PVs, and the hdcrypt driver handles
the encryption process. Although the core function is similar, some new options and actions
were added to the hdcryptmgr command specifically for PV encryption.
The size of the encrypted PV is smaller than the size of the PV before encryption because the
encryption feature reserves some space on the PV for the encryption process.
When the disk is initialized for encryption and unlocked, it may be used like any other disk in
AIX, except that encrypted disks cannot be used as part of the rootvg volume group. As the
OS writes data to the disk, the data is encrypted; when data is read from the disk, it is
decrypted before being passed to the user.
When the key is stored in a PKS or in a network key manager, the PV is unlocked
automatically during the start process. The authunlock action parameter of the hdcryptmgr
command can be used to manually unlock an encrypted PV. Any attempt to perform an I/O
operation on a locked encrypted PV fails with a permission denied error until that PV is
unlocked.
If the AIX LPAR restarts, encrypted disks that use only the passphrase wrapping key
protection method must be manually unlocked by using the hdcryptmgr authunlock action. If
one of the other methods, such as using a key server or PKS, was added to the disk by using
the authadd action, AIX attempts to automatically unlock the disk during start. Any attempt to
do I/O to an encrypted disk that is still locked fails.
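The following sketch shows the general shape of these operations. The disk name is
hypothetical, and the exact hdcryptmgr options can vary by AIX release, so treat this as an
outline rather than a verified procedure (see the hdcryptmgr man page):

# Initialize encryption on a disk (prompts for a passphrase)
hdcryptmgr pvenable hdisk2
# Display the encryption status of the disk
hdcryptmgr showpv hdisk2
# After a restart, manually unlock a passphrase-protected disk
hdcryptmgr authunlock hdisk2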
If the data backup operation is running in the operating system instance, the operating system
reads data and decrypts that data before sending it to the backup software. The backup
media contains the decrypted user data. The metadata that is related to encryption is not
stored in the backup media. If this backup data is restored to another PV, data is encrypted
only if encryption is enabled for that PV. If encryption is not enabled for the destination PV, the
restored data is not encrypted and can be used directly even by older levels of AIX.
If data is backed up by using a storage device such as snapshot or IBM FlashCopy®, the data
that is backed up is encrypted. The backup data in the storage device includes both the
encryption metadata and the encrypted user data. The storage-based backup is a
block-for-block copy of the encrypted data and the storage cannot determine that the data is
encrypted by the operating system.
For more information about PV encryption in AIX, see the following resources:
- Understanding AIX Physical Volume Encryption
- Encrypted physical volumes
- Encrypting physical volumes
In addition to the standard UNIX discretionary access control (DAC), AIX has ACLs. ACLs
define access to files and directories more granularly. Typically, an ACL consists of a series
of entries that are called access control entries (ACEs). Each ACE defines the access rights
for a user in relationship to the object.
When access is attempted, the operating system uses the ACL that is associated with the
object to see whether the user has the rights to do so. These ACLs and the related access
checks form the core of the DAC mechanism that is supported by AIX.
The operating system supports several types of system objects that enable user processes to
store or communicate information. The most important types of access-controlled objects are
as follows:
- Files and directories
- Named pipes
- IPC objects such as message queues, shared memory segments, and semaphores
The DAC mechanism enables effective access control of information resources and provides
for separate protection of the confidentiality and integrity of the information. Owner-controlled
access control mechanisms are only as effective as users make them. All users must
understand how access permissions are granted and denied, and how they are set.
For example, an ACL that is associated with a file system object (file or directory) can enforce
the access rights for various users regarding access to an object. It is possible that such an
ACL might enforce different levels of access rights, such as read/write, for different users.
Typically, each object has a defined owner and sometimes an associated primary group. The
owner of a specific object controls its discretionary access attributes. The owner’s
attributes are set to the creating process’s effective UID.
The following list contains direct-access control attributes for the different types of objects:
Owner
For SVIPC objects, the creator or owner can change the object’s ownership. SVIPC
objects have an associated creator that has all the rights of the owner (including access
authorization). The creator cannot be changed, even with root authority.
Group
SVIPC objects are initialized to the effective group ID (GID) of the creating process. For
file system objects, the direct-access control attributes are initialized to either the effective
GID of the creating process or the GID of the parent directory (determined by the group
inheritance flag of the parent directory). The owner of an object can change the group, and
the new group must be either the effective GID of the creating process or the GID of the
parent directory. (SVIPC objects have an associated creating group that cannot be
changed, and they share the access authorization of the object group.)
Mode
The chmod command (in numeric mode with octal notations) can set base permissions and
attributes. The chmod subroutine that is called by the command disables extended
permissions: the extended permissions are disabled if you use the numeric mode of the
chmod command on a file that has an ACL. The symbolic mode of the chmod command
disables extended ACLs for the NFS4 ACL type, but does not disable extended permissions
for AIXC type ACLs. For more information about numeric and symbolic mode, see the chmod
man page.
Many objects in the operating system, such as sockets and file system objects, have ACLs
that are associated for different subjects. The details of these ACLs for these object types can
vary from one to another.
Traditionally, AIX supported mode bits for controlling access to file system objects. It also
supported a unique form of ACL around mode bits. This ACL consists of base mode bits and
can define multiple ACEs, with each ACE defining access rights for a user or group in terms
of mode bits. This classic type of ACL behavior is still supported as the AIXC ACL type.
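The following commands illustrate basic ACL management on AIX. The file names are
hypothetical; this is a minimal sketch of the aclget, aclput, acledit, and aclconvert
commands rather than an example from this book:

# Display the ACL of a file (type, base permissions, extended entries)
aclget file1
# Save an ACL to a file, then apply it to another file
aclget -o /tmp/file1.acl file1
aclput -i /tmp/file1.acl file2
# Edit an ACL interactively (uses the EDITOR environment variable)
acledit file1
# Convert a file's ACL to the NFS4 type
aclconvert -t NFS4 file1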
Most environments require that different users manage different system administration duties.
It is necessary to maintain separation of these duties so that no single system management
user can accidentally or maliciously bypass system security. Although traditional UNIX
system administration cannot achieve these goals, RBAC can.
Beginning with AIX 6.1, a new implementation of RBAC provides for a fine-grained
mechanism to segment system administration tasks. Because these two RBAC
implementations differ greatly in function, the following terms are used:
Legacy RBAC Mode: The historic behavior of AIX roles that apply to versions before AIX 6.1.
Enhanced RBAC Mode: The new implementation that was introduced with AIX 6.1.
Both modes of operation are supported. However, Enhanced RBAC Mode is the default on
newly installed AIX systems after AIX 6.1. The following sections provide a brief description of
the two modes and their differences. They also include information about configuring the
system to operate in the correct RBAC mode.
Legacy RBAC Mode is supported for compatibility, but Enhanced RBAC Mode is the default
and preferred RBAC mode on AIX.
These integration options center on the use of granular privileges and authorizations and the
ability to configure any command on the system as a privileged command. Features of the
enhanced RBAC mode are installed and enabled by default on all installations of AIX
beginning with AIX 6.1.
The enhanced RBAC mode provides a configurable set of authorizations, roles, privileged
commands, devices, and files through the following RBAC databases. With enhanced RBAC,
the databases can be either in the local file system or managed remotely through LDAP.
- Authorization database
- Role database
- Privileged command database
- Privileged device database
- Privileged file database
Enhanced RBAC mode introduces a new naming convention for authorizations that allows a
hierarchy of authorizations to be created. AIX provides a granular set of system-defined
authorizations, and an administrator may create more user-defined authorizations as
necessary.
The behavior of roles was enhanced to provide separation of duty functions. Enhanced RBAC
introduces the concept of role sessions. A role session is a process with one or more
associated roles. A user can create a role session for any roles that they are assigned, thus
activating a single role or several selected roles at a time. By default, a new system process
does not have any associated roles. Roles are further enhanced with the requirement that a
user must authenticate before activating a role. This requirement protects against an
attacker taking over a user session, because the attacker would also need to authenticate to
activate the user’s roles.
The information in the RBAC databases is gathered and verified and then sent to an area of
the kernel that is designated as the Kernel Security Tables (KSTs). The state of the data in
the KST determines the security policy for the system. Entries that are modified in the
user-level RBAC databases are not used for security decisions until this information is sent to
the KST by using the setkst command.
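The following sketch shows the typical flow of defining and activating a role. The role and
user names are hypothetical; the authorization string is one of the system-defined
authorizations, and exact names can vary by AIX level:

# Create a role that carries a system-defined authorization
mkrole authorizations=aix.network.manage net_role
# Assign the role to a user
chuser roles=net_role alice
# Send the updated RBAC databases to the Kernel Security Tables
setkst
# As the user: activate the role in a new role session and verify it
swrole net_role
rolelist -e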
Note: For more information about RBAC on AIX, see Role-based access control.
Note: For AIX users, these commands are available in the IBM AIX Toolbox for Open
Source Software.
In addition to the GNU Public License (GPL), each of these packages includes its own
licensing information, so review the individual tools for their licensing information.
Important: The freeware packages that are provided in the AIX Toolbox for Open Source
Software are made available as a convenience to IBM customers. IBM does not own these
tools, did not develop or exhaustively test them, and does not provide support for them. IBM
compiled these tools so that they run with AIX.
With AIX Security Expert, you can easily apply a chosen security level without the need for
extensive research and manual implementation of individual security elements. Also, the tool
enables you to create a security configuration snapshot, which you can use to replicate the
same settings across multiple systems, streamlining security management and ensuring
consistency across an enterprise environment.
AIX Security Expert can be accessed either through SMIT or by using the aixpert command.
AIX Security Expert provides a menu to centralize effective and common security
configuration settings. These settings are based on extensive research on properly securing
UNIX systems. Default security settings are provided for broad security environment needs
(High-Level Security, Medium-Level Security, and Low-Level Security), and advanced
administrators can set each security configuration setting independently.
Configuring a system at too high a security level might deny necessary services. For
example, telnet and rlogin are disabled for High-Level Security because the login password is
sent over the network unencrypted. Conversely, if a system is configured at too low a security
level, it can be vulnerable to security threats. Because each enterprise has its own unique set
of security requirements, the predefined High-Level Security, Medium-Level Security, and
Low-Level Security configuration settings are best used as a starting point rather than an
exact match for the security requirements of an enterprise.
The practical approach to using AIX Security Expert is to establish a test system (in a realistic
test environment) similar to the production environment in which it will be deployed. Install the
necessary business applications and run AIX Security Expert by using the GUI. The tool
analyzes this running system in its trusted state. Depending on the security options that you
choose, AIX Security Expert enables port scan protection, turns on auditing, blocks network
ports that are not used by business applications or other services, and applies many other
security settings. After re-testing with these security configurations in place, the system is
ready to be deployed in a production environment. Also, the AIX Security Expert XML file that
defines the security policy or configuration of this system can be used to implement the same
configuration on similar systems in your enterprise.
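The following aixpert invocations sketch this workflow. The output path is hypothetical;
check the aixpert man page for the options on your AIX level:

# Write the high-level settings to a file for review, without applying them
aixpert -l high -n -o /tmp/aixpert_high.xml
# Apply the medium-level security settings
aixpert -l medium
# Check the system against the previously applied security policy
aixpert -c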
Note: For more information about security hardening, see NIST Special Publication
800-70, NIST Security Configurations Checklist Program for IT Products.
For a full description of AIX Security Expert on AIX 7.3, see AIX Security Expert.
The fpm command enables administrators to harden their system by setting permissions for
important binary files and dropping the setuid and setgid bits on many commands in the
operating system. This command is intended to remove the setuid permissions from
commands and daemons that are owned by privileged users, but you can also customize it to
address the specific needs of unique computer environments.
The setuid programs on the base AIX operating system are grouped to enable levels of
hardening. This grouping enables administrators to choose the level of hardening according
to their system environment. Also, you can use the fpm command to customize the list of
programs that must be disabled in your environment. Review the levels of disablement and
choose the right level for your environment.
Perform appropriate testing before using this command to change the execution permissions
of commands and daemons in any critical computer environment. If you encounter problems
in an environment where execution permissions were modified, restore the default
permissions and re-create the problem in this default environment to ensure that the issue is
not due to lack of appropriate execution permissions.
The fpm command provides the capability to restore the original AIX installation default
permissions by using the -l default flag.
Also, the fpm command logs the permission state of the files before changing them. The fpm
log files are created in the /var/security/fpm/log/<date_time> file. If necessary, you can
use these log files to restore the system's file permissions that are recorded in a previously
saved log file.
When the fpm command is used on files that have extended permissions, it disables the
extended permissions, although any extended permission data that existed before the fpm
invocation is retained in the extended ACL.
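The following fpm invocations sketch a cautious workflow (preview first, then apply, with a
path back to the defaults); options can vary by AIX level:

# Preview the permission changes for the high level without applying them
fpm -l high -p
# Apply the high-level hardening
fpm -l high
# Restore the AIX installation default permissions
fpm -l default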
Customized configuration files can be created and enacted as part of the high, medium, low,
and default settings. File lists can be specified in the /usr/lib/security/fpm/custom/high/*
directory, the /usr/lib/security/fpm/custom/medium/* directory, and the
/usr/lib/security/fpm/custom/default/* directory. To leverage this feature, create a file
containing a list of files that you want to be automatically processed in addition to the fpm
command’s internal list. When the fpm command runs, it also processes the lists in the
corresponding customized directories. To see an example of the format for a customized file,
view the /usr/lib/security/fpm/data/high_fpm_list file. The default format can be viewed
in the /usr/lib/security/fpm/data/default_fpm_list.example file. For the customization of
the -l low flag, the fpm command reads the same files in the
/usr/lib/security/fpm/custom/medium directory, but removes only the setgid permissions,
whereas the -l medium flag removes both the setuid and setgid permissions.
The fpm command cannot run on Trusted Computing Base (TCB)-enabled hosts.
A malicious user typically impacts a system by gaining unauthorized access and then
installing harmful programs such as trojans or rootkits, or by modifying sensitive security
files, which leaves the system vulnerable and prone to exploitation. TE aims to prevent such
activities or, in cases where incidents do occur, to quickly identify them.
Using the functions that are provided by TE, the system administrator can define the exact set
of executable files that are permitted to run or specify the kernel extensions that may load.
Also, you can use TE to examine the security status of the system and identify files that were
updated, which raises the trustworthiness of the system and makes it harder for an attacker to
cause damage.
The set of features under TE can be grouped into the following categories:
- Managing a Trusted Signature Database (TSD)
- Auditing the integrity of the Trusted Signature Database
- Configuring security policies
- Trusted Execution Path and Trusted Library Path
TE is a powerful and enhanced mechanism that overlaps some of the TCB functions and
provides advance security policies to better control the integrity of the system. Although the
TCB is still available, TE introduces a new and more advanced concept of verifying and
guarding the system integrity.
AIX Trusted Execution uses whitelisting to prevent or detect malware that runs on your AIX
system. It provides the following features:
- Provides cryptographic checking that you can use to determine whether a hacker replaced
  an IBM published file with a trojan horse.
- Scans for rootkits.
- Detects whether various attributes of a file were altered.
- Corrects certain file attribute errors.
- Provides whitelisting.
- Provides numerous configuration options.
- Detects and prevents malicious scripts, executable files, kernel extensions, and libraries.
- Protects files from being altered by a hacker that gains root access.
- Provides a function to protect TE’s configuration from a hacker that gains root access.
- Provides a function for using digital signatures to verify that IBM and non-IBM published
  files were not altered by an attacker.
For integrity checking, TE provides both system and runtime checking, whereas the TCB
provides system checking only. TE security policies include the following ones:
CHKSCRIPTS          Checks the integrity of shell scripts before running the scripts.
STOP_UNTRUSTD       Does not load files unless they are in the TSD.
STOP_ON_CHKFAIL     Does not load a file if its integrity check fails.
For TE to work, the CryptoLight for C library (CLiC) and kernel extension must be installed. To
see whether it is installed and loaded into the kernel, run the commands that are shown in
Example 4-44.
The TSD database stores the critical security parameters of trusted files that are on the
system. This database is at /etc/security/tsd/tsd.dat and comes with the AIX media. In
TE’s context, trusted files are files that are critical from the security perspective of a system
and, if compromised, can jeopardize the security of the entire system. Typically, the files that
match this definition are as follows:
- Kernel (operating system)
- All SUID root programs
- All SGID root programs
- Any program that is exclusively run by root or by a member of the system group
- Any program that must be run by the administrator while on the trusted communication
  path (for example, the ls command)
- The configuration files that control system operation
- Any program that is run with the privilege or access rights to alter the kernel or the
  system configuration files
Ideally, every trusted file has an associated stanza, or file definition, that is stored in
the TSD. You can mark a file as trusted by adding its definition to the TSD by using the
trustchk command. You can use this command to add, delete, or list entries from the TSD.
You can lock the TSD so that even root cannot write to it. Locking the TSD becomes
effective immediately.
Example 4-45 shows how an entry appears in the TSD database file.
/usr/lib/drivers/igcts:
Owner = root
Group = system
Mode = 555
Type = HLINK
To enable TSD protection, run the commands that are shown in Example 4-46.
The TSD is immediately protected against any modification by either the trustchk command
or by manually editing the file, as shown in Example 4-47.
To enable the TSD for write access again, turn off TE or set tsd_lock to off by using the
trustchk command.
When the system is blocking any untrusted shell scripts by using the CHKSCRIPTS policy, as
shown in Example 4-48, make sure that all scripts that are needed by your services are
included in the TSD.
For example, if you are using OpenSSH, make sure that the sshd start and stop scripts (such
as Ssshd and Ksshd) in /etc/rc.d/rc2.d are in the TSD. Otherwise, sshd does not start when
the system is restarted, and it does not shut down cleanly on system shutdown.
When you try to start a script with CHKSCRIPTS=ON and that script is not included in the TSD,
its execution is denied regardless of its permissions, even when root starts it. This
situation is shown in Example 4-49.
# ls -l foo
-rwx------- root system 17 May 10 11:51 foo
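To allow such a script to run, add it to the TSD and verify it, as in the following sketch
(the file name is from the example above; options can vary by release):

# Add the script to the Trusted Signature Database
trustchk -a foo
# Audit the file against its TSD entry (report-only mode)
trustchk -n foo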
The Trusted Library Path has the same function as the Trusted Execution Path, with the only
difference being that it defines the directories that contain the trusted libraries of the
system. When the Trusted Library Path is enabled, the system loader allows only libraries
from this path to be linked to the commands.
You can use the trustchk command to enable or disable the Trusted Execution Path or the
Trusted Library Path, or to set the colon-separated path list for both, by using the TEP and
TLP command-line interface (CLI) attributes of trustchk.
Important: Be careful when you are changing either the Trusted Execution Path or the
Trusted Library Path. Do not remove the paths from their default settings, which are as
follows:
TEP=/usr/bin:/usr/sbin:/etc:/bin:/sbin:/sbin/helpers/jfs2:/usr/lib/instl:/usr/c
cs/bin
TLP=/usr/lib:/usr/ccs/lib:/lib:/var/lib
If you remove the paths, the system will not restart or function properly because it cannot
access the necessary files and data.
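The following sequence illustrates TE blocking a command whose hash no longer matches its
TSD entry (a saved copy of ls is used to recover; the intermediate step that replaces
/usr/bin/ls with a tampered copy is not shown):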
# cp /usr/bin/ls /usr/bin/.goodls
- Hash value of "/usr/bin/ls" command changed
# trustchk -p TE=ON CHKEXEC=ON STOP_ON_CHKFAIL=ON
# ls
ksh: ls: 0403-006 permission denied.
# cp /usr/bin/ls /usr/bin/.badls
# cp /usr/bin/.goodls /usr/bin/ls
# chown bin:bin /usr/bin/ls
# ls
file1 file2 dir1
Adding the STOP_UNTRUSTD=ON option stops executable files that are not listed in
/etc/security/tsd/tsd.dat, as shown in Example 4-51.
With the constant threat of security breaches, companies are under pressure to lock down
every aspect of their applications, infrastructure, and data.
One method of securing IBM AIX network transactions is to establish networks that are based
on the IPsec protocol. IPsec is a network protocol suite that defines how to secure a
computer network at the IP layer. When determining how to secure your IPsec connections,
you might need to consider these items:
- The connectivity architecture, whether it is an internal or external connection
- The encryption mechanisms and authentication services to use
IBM AIX IPsec uses the mkfilt and genfilt binary files to activate and add the filter rules.
You can also use them to control the filter logging functions, which work on IP version 4 and
IP version 6. With the IPsec feature enabled, you can also create IP filtering rules to block
an IP address from accessing hosts or specific ports.
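For illustration, the following sketch blocks inbound Telnet with an IPv4 filter rule and
then activates the rule set. The addresses and flags follow the genfilt and mkfilt syntax
but are illustrative; verify the options against the man pages for your release:

# Deny inbound TCP traffic to port 23 (Telnet) from any host
genfilt -v 4 -a D -s 0.0.0.0 -m 0.0.0.0 -d 0.0.0.0 -M 0.0.0.0 \
        -c tcp -O eq -P 23 -w I -D "block inbound telnet"
# Activate the IPv4 filter rules
mkfilt -v 4 -u
# List the active IPv4 filter rules
lsfilt -v 4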
4.11.1 Auditing
Auditing is the process of examining systems, accounts, or activities to verify accuracy,
compliance, and efficiency. An auditing subsystem records system events to monitor and
analyze transactions, and help ensure transparency and traceability.
By default, auditing is disabled in AIX. When activated, the auditing subsystem begins
collecting information based on your configuration settings. The frequency of auditing
depends on your environment and usage patterns. Although auditing is a best practice for
enhanced security and troubleshooting, the decision to enable auditing and its frequency is
yours.
The audit logger is responsible for constructing the complete audit record, which contains the
audit header, which contains information that is common to all events (such as the name of
the event, the user responsible, and the time and return status of the event), and the audit
trail, which contains event-specific information. The audit logger appends each successive
record to the kernel audit trail, which can be written in either (or both) of two modes:
BIN mode
The trail is written into alternating files, which provide safety and long-term storage.
STREAM mode
The trail is written to a circular buffer that is read synchronously through an audit
pseudo-device. STREAM mode offers immediate response.
Information collection can be configured at both the front end (event recording) and at the
back end (trail processing). Event recording is selectable on a per-user basis. Each user has
a defined set of audit events that are logged in the audit trail when they occur. At the back
end, the modes are individually configurable so that the administrator can employ the
back-end processing that is best suited for a particular environment. In addition, BIN mode
auditing can be configured to generate an alert if the file system space that is available for the
trail is getting too low.
These processing options help manage and analyze audit data effectively.
The STREAM mode audit trail can be monitored in real time to provide an immediate
threat-monitoring capability. Configuring these options is handled by separate programs that
can be started as daemon processes to filter either BIN or STREAM mode trails, although
some of the filter programs are more naturally suited to one mode or the other.
To help ensure that the AIX audit subsystem can retrieve information from the AIX security
audit, configure the following files on the AIX server that is to be monitored (a sample
config fragment follows the list):
- streamcmds
- config
- events
- objects
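As a sketch only, a /etc/security/audit/config file typically contains stanzas like the
following ones; the paths, sizes, and class assignments are illustrative and must be adapted
to your environment:

start:
        binmode = on
        streammode = off

bin:
        trail = /audit/trail
        bin1 = /audit/bin1
        bin2 = /audit/bin2
        binsize = 10240
        cmds = /etc/security/audit/bincmds

users:
        root = general

After the configuration is in place, start auditing and check its state:

# Start the audit subsystem and display its status
audit start
audit query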
For more information about how to configure the AIX audit subsystem for collecting,
recording, and auditing the events, see the following resources:
- AIX AUDIT: The Audit Subsystem in AIX
- The config file
- The events file
- Configuring the AIX Audit subsystem
- Auditing overview
- Enhanced auditing
4.11.2 Accounting
The accounting subsystem provides features for monitoring system resource usage and
billing users for the usage of resources. Accounting data can be collected for various system
resources, such as processors, memory, and disks.
Other data that is collected by the accounting system is connect-time usage accounting,
which lets you know how many users are connected to a system and for how long. You can
use the connect-time data to detect unused accounts, which should be invalidated (for security
reasons) or even erased to save resources. Also, you can use the connect-time usage data to
discover suspect activities (such as too many unsuccessful logon attempts) that signal that
security measures should be adopted.
The data that is collected by the accounting subsystem is used to automatically generate
reports, such as daily and weekly reports. The reports can be generated at any time by using
accounting-specific commands. The accounting subsystem provides tools that you can use to
observe how the system reacts at a particular moment in time (for example, when running a
specific command or task).
For more information about how to set up accounting subsystem and accounting internals,
see Administering system accounting.
In the AIX Event Infrastructure, an event refers to any change in a system’s state or values
that the kernel or its extensions can detect at the moment that the modification takes place.
These events are stored as files within a specialized file system that is known as the pseudo
file system. The AIX Event Infrastructure offers several benefits:
- There is no need for constant polling. Users monitoring the events are notified when those
  events occur.
- Detailed information about an event (such as stack trace, and user and process
  information) is provided to the user monitoring the event.
- Existing file system interfaces are used so that there is no need for a new application
  programming interface (API).
- Control is handed to the AIX Event Infrastructure at the time that the event occurs.
IBM POWER7+ was the first IBM POWER processor to include Nest Accelerator (NX) for
symmetric (shared key) cryptography. The accelerators are shared among the LPARs under
the control of the PowerVM Hypervisor, and accessed through a hypervisor call. The internal
NX crypto API calls require extra pages of memory to perform the relevant hypervisor calls.
This overhead makes NX calls suitable only for large blocks of data. A tuning parameter
(min_sz) sets the minimum data size for NX accelerator operations.
1 Source: https://www.ibm.com/docs/en/aix/7.3?topic=ahafs-aix-event-infrastructure-components
More improvements were made in the IBM Power9 and IBM Power10 processors to increase
the encryption capabilities and improve system performance.
The ACF kernel services are implemented in the PKCS11 device driver (kernel extension),
which provides services for other kernel subsystems like EFS, IPsec, and LV-Encryption.
User space applications can also use ACF kernel services by calling the AIX PKCS #11
subsystem library (/usr/lib/pkcs11/ibm_pkcs11.so).
For more information, see Exploitation of In-Core Acceleration of POWER processors for AIX.
LDAP defines a message protocol that is used by directory clients and directory servers.
LDAP originated from the X.500 Directory Access Protocol, which is considered heavyweight.
X.500 needs the entire OSI protocol stack, and LDAP is built on the TCP/IP stack. LDAP is
considered lightweight because it omits many X.500 operations that are rarely used.
An application-specific directory stores only the information that is needed by the
application that uses it and typically does not provide general search capabilities. Keeping
multiple copies of information up-to-date and synchronized is difficult. What is needed is a
common, application-independent directory, which you can achieve by using LDAP.
LDAP works with most vendor directory services, such as Microsoft Active Directory (AD).
With LDAP, sharing information about users, services, systems, networks, and applications
from a directory service to other applications and services is simpler to implement. When
using LDAP, client access is independent of the platform. Because LDAP is a standard
protocol, you can set up clients without any dependency on the specific LDAP server that you
use.
For example, if you have a Microsoft AD (LDAP server), you can configure an LDAP client
with IBM Tivoli Directory Server file sets and access the data from the server.
When you implement a setup to use LDAP, multiple applications, such as IBM Verse, an
intranet page, BestQuest, RQM, and ClearQuest, can be connected to a user entry in the same
directory. If a user changes their password once, the change is reflected in all applications.
Figure 4-4 Several applications that use the attributes of a single entry
The AIX LDAP load module is fully integrated within the AIX operating system. After the LDAP
authentication load module is enabled to serve user and group information, high-level APIs,
commands, and system management tools work in their usual manner. An -R flag is
introduced for most high-level commands to work through different load modules.
AIX supports LDAP-based user and group management by integrating with IBM Security
Verify Directory servers, non-IBM RFC 2307-compliant servers, and Microsoft AD. As a best
practice, use IBM Security Verify Directory to define AIX users and groups. For more
information about setting up the server, see Setting up an IBM Security Verify Directory
Server.
AIX supports non-IBM directory servers. A directory server that is RFC 2307 compliant is
supported, and AIX treats these servers similarly to IBM Security Verify Directory servers.
Directory servers that are not RFC 2307 compliant can be used, but they require extra
manual configuration to map the data schema. There might be some limitations due to the
subset of user and group attributes in RFC 2307 compared to the AIX implementation. LDAP
Version 3 protocol support is required.
AIX also supports Microsoft AD as an LDAP server for user and group management. For this
setup, the UNIX supporting schema must be installed (it is included in Microsoft Service for
UNIX). AIX supports AD running on Windows 2000, 2003, and 2003 R2 with specific
Microsoft Service for UNIX schema versions.
Some AIX commands might not function with LDAP users if an AD server is used because of
the differences in user and group management between UNIX and Windows systems. Most
user and group management commands (such as lsuser, chuser, rmuser, lsgroup, chgroup,
rmgroup, id, groups, passwd, and chpasswd) should work, depending on access rights.
For more information about installing the IBM Security Verify Directory, see LDAP on AIX:
Step by step instructions for installing the LDAP client file sets on AIX.
A small system might have 3 - 5 users and a large system might have several thousand users.
Some installations have all their workstations in a single, relatively secure area. Others have
widely distributed users, which include users who connect by dialing in and indirect users that
are connected through personal computers or system networks. Security on IBM i is flexible
enough to meet the requirements of this wide range of users and situations.
System security has some important objectives. Each security control or mechanism should
satisfy one or more of the following security goals:
- Confidentiality
- Integrity
- Availability
- Authentication
- Authorization
- Auditing or logging
Confidentiality
Confidentiality concerns include the following items:
- Protecting against disclosing information to unauthorized people
- Restricting access to confidential information
- Protecting against unauthorized system users and outsiders
Integrity
Integrity is an important aspect when applied to data within your enterprise. Integrity goals
include the following items:
- Protecting against unauthorized changes to data
- Restricting manipulation of data to authorized programs
- Providing assurance that data is trustworthy
Availability
Systems are often critical to keep an enterprise running. Availability includes the following
items:
- Preventing accidental changes or destruction of data
- Protecting against attempts by outsiders to abuse or destroy system resources
Authentication
Ensuring that your data is accessible only by entities that are authorized is one of the basic
tenets of data security. Proper authentication methodologies are important for the following
reasons:
- Determine whether users are who they claim to be. The most common technique to
  authenticate is by user profile name and password.
- Provide extra methods of authentication, such as using Kerberos as an authentication
  protocol in a single sign-on (SSO) environment.
Auditing or logging
Auditing and logging are important in discovering and stopping access threats before the
system becomes compromised.
- When your security plan is implemented, you must monitor the system for any
  out-of-policy security activity and resolve any discrepancies that are created by the
  activity.
- Depending on your organization and security policy, you might need to issue a security
  warning to the person who performed the out-of-policy security activity so that they know
  not to perform this action in the future.
System security is often associated with external threats, such as hackers or business rivals.
However, protection against system accidents by authorized system users is often the
greatest benefit of a well-designed security system. In a system without good security
features, pressing the wrong key might result in deleting important information. System
security can prevent this type of accident.
The best security system functions cannot produce good results without good planning.
Security that is set up in small pieces without planning can be confusing and difficult to
maintain and audit. Planning does not imply designing the security for every file, program, and
device in advance. It does imply establishing an overall approach to security on the system
and communicating that approach to application designers, programmers, and system users.
As you plan security on your system and decide how much security that you need, consider
these questions:
Is there a company policy or standard that requires a certain level of security?
Do the company auditors require some level of security?
How important is your system and the data on it to your business?
How important is the error protection that is provided by the security features?
What are your company security requirements for the future?
To facilitate installation, many of the security capabilities on your system are not activated
when your system is shipped. This chapter provides best practices to bring your system to a
reasonable level of security. Always consider the security requirements of your own
installation as you evaluate any best practices.
IBM periodically releases fixes to address issues that are discovered in IBM i programs.
These fixes are bundled into cumulative Program Temporary Fix (PTF) packages, which
contain fixes for specific periods. Consider installing cumulative PTF packages twice a year in
dynamic environments and less frequently in stable ones. Also, apply them when making
major hardware or software changes.
By prioritizing fixes, fix groups, cumulative packages, and high-impact pervasive (HIPER)
fixes, you can prevent security issues that are caused by failing to implement operating
system fixes to correct known issues.
The IBM Navigator for i web-based tool contains technology for doing system management
tasks across one or more systems concurrently. IBM Navigator for i provides wizards that
simplify fix management. You can use the wizards to send, install, or uninstall fixes. You can
also use the compare and update wizard to compare a model system to multiple target
systems to find missing or extra fixes. Also, you can use tools like IBM Administration Runtime
Expert for i and Ansible to compare and automate this process.
Another option for managing fixes is to use an SQL query to identify any issues, as
documented by IBM. The query is shown in Example 5-1.
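For illustration, a query in this spirit can use the SYSTOOLS.GROUP_PTF_CURRENCY service,
which compares the PTF groups that are installed on the partition with the latest levels that
are available from IBM (this sketch is not Example 5-1; column names can vary by release):

SELECT PTF_GROUP_TITLE, PTF_GROUP_LEVEL_INSTALLED,
       PTF_GROUP_LEVEL_AVAILABLE, PTF_GROUP_CURRENCY
  FROM SYSTOOLS.GROUP_PTF_CURRENCY
  ORDER BY PTF_GROUP_CURRENCY;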
Figure 5-2 shows the QSECURITY panel where the level can be set.
Figure 5-2 The QSECURITY system value and the various security levels on IBM i
System values also provide customization of many characteristics of your IBM i platform. You
can use system values to define system-wide security settings. To access the jobs category
of system values from IBM Navigator for i, select Configuration and Services and then
select System Values, as shown in Figure 5-3 on page 139.
You can restrict users from changing the security-related system values. The Change SST
Security Attributes (CHGSSTSECA) command, System Service Tools (SST), and Dedicated
Service Tools (DST) are options to lock these system values. By locking the system values,
you can prevent even a user with *SECADM and *ALLOBJ authority from changing these system
values with the CHGSYSVAL command. In addition to restricting changes to these system
values, you can also restrict adding digital certificates to a digital certificate store with the Add
Verifier application programming interface (API) and restrict password resetting on the digital
certificate store.
To see a list of all the security-related system values, go to IBM Navigator for i and select
Security → Security Config. info. The values typically relate to your security environment
requirements and might differ slightly for every organization.
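You can also retrieve security-related system values programmatically. As an illustrative
sketch, the QSYS2.SYSTEM_VALUE_INFO SQL service returns current system values; the
selection below is an assumption about which values are of interest:

SELECT SYSTEM_VALUE_NAME, CURRENT_NUMERIC_VALUE, CURRENT_CHARACTER_VALUE
  FROM QSYS2.SYSTEM_VALUE_INFO
  WHERE SYSTEM_VALUE_NAME IN ('QSECURITY', 'QPWDLVL', 'QAUDCTL');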
5.5 Authentication
Authentication is the set of methods that are used by organizations to help ensure that only
authorized personnel, services, and applications with the correct permissions can access
company resources. There are bad actors who want to gain access to your systems with ill
intentions, thus making authentication a critical part of cybersecurity. These bad actors try to
steal credentials from users who have access to your environment.
Authentication helps your organization protect your applications, systems, data, websites, and
networks from internal and external attacks. It also aids in keeping an individual’s personal
data confidential so that they can conduct their everyday business online with less risk. When
authentication systems are weak, attackers can compromise a user account either by
guessing a password or tricking a person into handing over their credentials. This situation
might lead to any of the following risks:
Exfiltration or a data breach
Installation of various types of malware (in enterprise environments, the most prevalent of
them is ransomware)
Non-compliance with different data privacy regulations
To enable an SSO environment, IBM provides two technologies that work together to enable
users to sign in with their Windows username and password and authenticate to IBM i
platforms in the network. Network Authentication Services and Enterprise Identity Mapping
(EIM) are the two technologies that an administrator must configure to enable an SSO
environment. Windows operating systems, AIX, and z/OS use the Kerberos protocol to
authenticate users to the network. A secure, centralized system that is called a key
distribution center authenticates principals (Kerberos users) to the network.
Network Authentication Services allows an IBM i platform to participate in the Kerberos realm,
and EIM provides a mechanism for associating these Kerberos principals to a single EIM
identifier that represents that user within the entire enterprise. Other UIDs, such as an IBM i
username, can also be associated with this EIM identifier. When a user signs on to the
network and accesses an IBM i platform, that user is not prompted for a UID and password. If
the Kerberos authentication is successful, applications can look up the association to the EIM
identifier to find the IBM i username. The user no longer needs a password to sign on to the
IBM i platform because the user is already authenticated through the Kerberos protocol.
Administrators can centrally manage UIDs with EIM, and network users need only to manage
one password. You can enable SSO by configuring Network Authentication Services and EIM
on your system.
The user profile is a powerful and flexible tool. It controls what the user can do and
customizes the way that the system appears to the user. The following sections describe
some of the important security features of the user profile.
Special authority
Special authorities determine whether the user may perform system functions, such as
creating user profiles or changing the jobs of other users. The special authorities that are
available are shown in Table 5-1.
*ALLOBJ All-object (*ALLOBJ) special authority grants access to any resource on the
system, even if private authority exists for the user.
*JOBCTL The Job control (*JOBCTL) special authority allows a user to change the
priority of jobs and of printing, end a job before it has finished, or delete
output before it has printed. *JOBCTL special authority can also give a user
access to confidential spooled output if output queues are specified as
OPRCTL(*YES).
*SPLCTL Spool control (*SPLCTL) special authority allows the user to perform all
spool control functions, such as changing, deleting, displaying, holding, and
releasing spooled files.
*SAVSYS Save system (*SAVSYS) special authority gives the user the authority to
save, restore, and free storage for all objects on the system, regardless of
whether the user has object existence authority to the objects.
*SERVICE Service (*SERVICE) special authority allows the user to start SST by using
the Start SST (STRSST) command. This special authority allows the user to
debug a program with only *USE authority to the program and perform the
display and alter service functions. It also allows the user to perform trace
functions.
*AUDIT Audit (*AUDIT) special authority gives the user the ability to view and
change auditing characteristics.
*IOSYSCFG System configuration (*IOSYSCFG) special authority gives the user the ability
to change how the system is configured. Users with this special authority
can add or remove communications configuration information, work with
TCP/IP servers, and configure the internet connection server (ICS). Most
commands for configuring communications require *IOSYSCFG special
authority.
Limit capabilities
The Limit capabilities field in the user profile determines whether the user can enter
commands and change the initial menu or initial program when signing on. The Limit
capabilities field in the user profile and the ALWLMTUSR parameter on commands apply only to
commands that are run from the CLI, the Command Entry display, FTP, REXEC, the
QCAPCMD API, or an option from a command grouping menu. Users are not restricted from
performing the following actions:
Run commands in control language (CL) programs that are running a command as a
result of taking an option from a menu.
Run remote commands through applications.
A key component of security is integrity: You must trust that objects on the system were not
tampered with or altered. Your IBM i software is protected by digital signatures.
Signing your software object is important if the object has been transmitted across the
internet or stored on media that you feel might have been modified. The digital signature can
be used to detect whether the object has been altered.
Digital signatures and their usage for verification of software integrity can be managed
according to your security policies by using the Verify Object Restore (QVFYOBJRST) system
value, the Check Object Integrity (CHKOBJITG) command, and the Digital Certificate Manager
(DCM) tool. Also, you can choose to sign your own programs (all licensed programs that are
included with the system are signed).
You can restrict adding digital certificates to a digital certificate store by using the Add Verifier
API and restrict resetting passwords on the digital certificate store. SST provides a new menu
option that is called “Work with system security” where you can restrict adding digital
certificates.
A group profile can own objects on the system. You can also use a group profile as a pattern
when creating individual user profiles by using the copy profile function.
There are two types of encryption keys that are used in SSL/TLS:
Asymmetric keys: The public and private key pair is used to identify the server and
initiate the encrypted session. The private key is known only to the server, and the public
key is shared through a certificate.
Symmetric session keys: Disposable keys are generated for each connection and used to
encrypt and decrypt transmitted data. The symmetric keys are securely exchanged by
using asymmetric encryption.
SSL and TLS support multiple symmetric ciphers and asymmetric public key algorithms. For
example, AES with 128-bit keys is a common symmetric cipher, and RSA and Elliptic Curve
Cryptography are commonly used asymmetric algorithms.
Overview
The IBM i system offers multiple SSL/TLS implementations, each adhering to
industry-defined protocols and specifications that are set by the Internet Engineering Task
Force (IETF). These implementations cater to different application needs and offer varying
functions. The specific implementation that is used by an application depends on the chosen
API set.
For Java applications, the configured Java Secure Socket Extension (JSSE) provider
determines the implementation because Java interfaces are standardized. Alternatively, an
application can embed its own implementation for exclusive usage.
Available implementations
Here is a breakdown of the available SSL/TLS implementations on IBM i:
System SSL/TLS:
– Primarily used by ILE applications.
– Certificate management is handled by the DCM. The certificates are stored in
Certificate Management Services (CMS) format (.KDB files).
– Although Java applications can use System SSL/TLS, it is not the typical approach.
Even rarer is a Java application that concurrently uses both System SSL/TLS and a
Java Keystore.
System SSL/TLS
System SSL/TLS is a set of generic services that are provided in the IBM i Licensed Internal
Code (LIC) to protect TCP/IP communications by using the SSL/TLS protocol. System
SSL/TLS is tightly coupled with the operating system, and the LIC sockets code specifically
provides extra performance and security.
System TLS has the infrastructure to support multiple protocols. The following protocols are
supported by System TLS:
Transport Layer Security 1.3 (TLSv1.3)
Transport Layer Security 1.2 (TLSv1.2)
Transport Layer Security 1.1 (TLSv1.1)
Transport Layer Security 1.0 (TLSv1.0)
Secure Sockets Layer 3.0 (SSLv3)
The QSSLPCL special value *OPSYS allows the operating system to change the protocols that
are enabled on the system. The value of QSSLPCL remains the same when the system
upgrades to a newer operating system release. If the value of QSSLPCL is not *OPSYS, then the
administrator must manually add newer protocol versions to QSSLPCL after the system moves
to a new release.
For more information about System SSL/TLS support for protocols and cipher suites, see
System SSL/TLS.
Important: It is a best practice to always run your IBM i server with the following network
protocols disabled. Although IBM provides configuration options to enable these weak
protocols, doing so puts your IBM i server at risk of a network security breach.
TLSv1.1
TLSv1.0
SSLv3
SSLv2
The QSSLCSL system value setting identifies the specific cipher suites that are enabled on the
system. Applications can negotiate secure sessions with only a cipher suite that is listed in
QSSLCSL. No matter what an application does with code or configuration, it cannot negotiate
secure sessions with a cipher suite if it is not listed in QSSLCSL. Individual application
configuration determines which of the enabled cipher suites are used for that application.
To restrict the System TLS implementation from using a particular cipher suite, complete the
following steps:
1. Change the QSSLCSLCTL system value to the special value *USRDFN so that you can edit the
QSSLCSL system value.
2. Remove all cipher suites that you want to restrict from the list in QSSLCSL.
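A minimal CL sketch of these steps follows. This is an illustrative outline, not a complete procedure; because QSSLCSL is a list system value, the practical way to remove entries is through the WRKSYSVAL change display:
CHGSYSVAL SYSVAL(QSSLCSLCTL) VALUE(*USRDFN)
WRKSYSVAL SYSVAL(QSSLCSL)  /* Take option 2 (Change) and remove the suites to restrict */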
The QSSLCSLCTL system value special value *OPSYS allows the operating system to change the
cipher suites that are enabled on the system. The value of QSSLCSLCTL remains the same
when the system upgrades to a newer operating system release. If the value of QSSLCSLCTL is
*USRDFN, then the administrator must manually add newer cipher suites to QSSLCSL after the
system moves to a new release. Setting QSSLCSLCTL to *OPSYS also adds the new values to
QSSLCSL.
A cipher suite cannot be added to QSSLCSL if the TLS protocol that is required by the cipher
suite is not set in QSSLPCL.
Service tools can be accessed from DSTs or SST. Service tools UIDs are required if you want
to access DST or SST, and to use the IBM Navigator for i functions for disk unit management.
Service tools UIDs are referred to as DST user profiles, DST UIDs, service tools user profiles,
or a variation of these names. Within this topic collection, the term “service tools user IDs” is
used.
Note: For more information about Service Tools for IBM i 7.5, see IBM i 7.5: Security
Service Tools.
The service tools UID that you use to access SST must have the functional privilege to use
SST. The IBM i user profile must have the following authorizations:
Authorization to use the STRSST CL command.
Service special authority (*SERVICE).
To exit from SST, press F3 (Exit) until you get to the Exit SST display, and then press Enter to
exit SST.
The service tools UID that you use to access service tools with DST must have the functional
privilege to use DST. You can start the DST by using function 21 from the system control
panel or by using a manual initial program load (IPL).
Accessing service tools by using the DST from the system control panel
To access service tools by using the DST from the control panel, complete the following
steps:
1. Put the control panel into manual mode.
2. Use the control panel to select function 21 and press Enter. The DST Sign On display
appears on the console.
A digital certificate is an electronic credential that you can use to establish proof of identity in
an electronic transaction. There are an increasing number of uses for digital certificates to
provide enhanced network security measures. For example, digital certificates are essential
to configuring and using TLS. Using TLS enables you to create secure connections
between users and server applications across an untrusted network, such as the internet.
TLS provides one of the best solutions for protecting the privacy of sensitive data, such as
usernames and passwords, over the internet. Many IBM i platforms and applications, such as
FTP, Telnet, and HTTP Server, provide TLS support to help ensure data privacy.
Capitalizing on the IBM i support for certificates is simple when you use DCM to centrally
manage certificates for your applications. DCM enables you to manage certificates that you
obtain from any certificate authority (CA). Also, you can use DCM to create and operate your
own local CA to issue private certificates to applications and users in your organization.
Planning and evaluation are the keys to using certificates effectively for their added security
benefits.
As described in 2.1, “Encryption technologies and their applications” on page 30, Power10
provides Transparent Memory Encryption (TME), which transparently encrypts and protects
memory within the system by using the encryption acceleration processors that are built in to
the Power10 processing chip, which provides protection without performance penalties.
IBM i offers various levels of encryption for databases and attached storage devices. By using
Field Procedures within IBM Db2, IBM i provides field-level encryption to directly protect
sensitive data fields within the database. Also, IBM i supports encryption for directly attached
storage devices to safeguard data at rest within the system.
IBM i includes both software cryptography and a range of cryptographic hardware options for
data protection and secure transaction processing. Users can leverage the built-in encryption
acceleration processors on the Power10 chip or integrate specialized cryptographic
coprocessors. Both options provide robust security without compromising performance.
IBM i cryptographic services help ensure data privacy, maintain data integrity, authenticate
communicating parties, and prevent repudiation when a party denies sending a message.
Cryptographic services support a hierarchical key system. At the top of the hierarchy is a set
of master keys. These keys are the only key values that are stored in the clear (unencrypted).
Cryptographic services securely store the master keys within the IBM i LIC.
In addition to the eight general-purpose master keys, cryptographic services supports two
special-purpose master keys:
The auxiliary storage pool (ASP) master key is used for protecting data in the independent
auxiliary storage pool (IASP) (in the Disk Management GUI, the IASP is known as an
Independent Disk Pool).
The save/restore master key is used to encrypt the other master keys when they are saved
to media by using a Save System (SAVSYS) operation.
You can work with Cryptographic Services Key Management by using the IBM Navigator for i
interface, as shown in Figure 5-4.
After you connect to IBM Navigator for i, select Security → Cryptographic Services Key
Management. Then, you can manage master keys and cryptographic keystore files.
You can also use the cryptographic services APIs or CL commands to work with the master
keys and keystore files.
Note: Use TLS to reduce the risk of exposing key values while performing key
management functions.
You can specify detailed authorities, such as adding records or changing records, or you can
use the system-defined subsets of authorities: *ALL, *CHANGE, *USE, and *EXCLUDE.
Files, programs, and libraries are the most common objects that require security protection,
but you can specify authority for any object on the system. The following list describes the
features of resource security:
Group profiles
A group of similar users can share the authority to use objects. For more information, see
5.5.4, “Group profiles” on page 142.
Authorization lists
Objects with similar security needs can be grouped into one list. Authority can be granted
to the list rather than to the individual objects.
Object ownership
Every object on the system has an owner. Objects can be owned by an individual user
profile or by a group profile. The correct assignment of object ownership helps you
manage applications and delegate responsibility for the security of your information.
Primary group
You can specify a primary group for an object. The primary group’s authority is stored with
the object. Using primary groups might simplify your authority management and improve
your authority checking performance.
Library authority
You can put files and programs that have similar protection requirements into a library and
restrict access to that library, which is often simpler than restricting access to each
individual object.
Directory authority
You can use directory authority the same way that you use library authority. You can group
objects in a directory and secure the directory rather than the individual objects.
Object authority
In cases where restricting access to a library or directory is not specific enough, you can
restrict authority access to individual objects.
Public authority
For each object, you can define what access is available for any system user who does not
have any other authority to the object. Public authority is an effective means for securing
information and provides good performance.
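As an illustration, the following CL commands (with hypothetical library, file, and group profile names) set public authority to *EXCLUDE and grant a group *CHANGE authority to one file:
GRTOBJAUT OBJ(PAYLIB/PAYROLL) OBJTYPE(*FILE) USER(*PUBLIC) AUT(*EXCLUDE)
GRTOBJAUT OBJ(PAYLIB/PAYROLL) OBJTYPE(*FILE) USER(PAYGRP) AUT(*CHANGE)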
The IBM i can log selected security-related events in a security audit journal. Several system
values, user profile values, and object values control which events are logged.
The security audit journal is the primary source of auditing information about the system. This
section describes how to plan, set up, and manage security auditing, what information is
recorded, and how to view that information.
A security auditor inside or outside your organization can use the auditing function that is
provided by the system to gather information about security-related events that occur on the
system.
When a security-related event that might be audited occurs, the system checks whether you
selected that event for audit. If you have, the system writes a journal entry in the current
receiver for the security auditing journal (QAUDJRN in library QSYS).
When you want to analyze the audit information that you collected in the journal, use IBM
Navigator for i or SQL services.
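For example, the QSYS2.DISPLAY_JOURNAL SQL table function can query QAUDJRN directly. The following sketch selects invalid sign-on attempts (audit entry type PW); it is illustrative and assumes a release that supports the named-parameter syntax:
SELECT entry_timestamp, journal_code, journal_entry_type
  FROM TABLE(QSYS2.DISPLAY_JOURNAL('QSYS', 'QAUDJRN',
                                   journal_entry_types => 'PW')) AS audit_entries;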
Auxiliary storage is the permanent disk space that you assign to the system in the form of disk
units, which are either physical or virtual. The disk pool can contain objects, libraries,
directories, and many other objects, such as object attributes. The concept of IASP also
forms a base for high availability and disaster recovery (HADR) solutions like PowerHA.
The concept of IASP is straightforward, and there are many solutions that are built around it.
IASPs provide an attractive solution for clients who are looking at server consolidation and
continuous availability with a minimum amount of downtime. Using the IASP provides both
technical and business advantages on IBM i.
The key difference between the system ASP and an IASP is that the system ASP is always
accessible when a system is running, and an IASP can be brought online or offline
independently of the system activity on any other pools.
An IASP must be brought online or “varied on” to make it visible to the system before
attempting to access data on it. If you want to make the IASP inaccessible to the system, “vary
off” the IASP. The varyon process is not instantaneous and can take several minutes. The
amount of time that is required depends on several factors.
Figure 5-5 shows a system with its SYSBAS or ASP and an IASP that is defined.
IASPs are numbered 33 - 255. Basic ASPs are numbered 2 - 32. All basic ASPs are
automatically available when the system is online and cannot be independently varied on or
off.
Figure 5-6 on page 153 shows a system with a system pool, a user ASP, and an IASP that is
defined.
An IASP can be deployed by using external storage (Storage Area Network (SAN) storage)
and defined on internal disks. However, there are many benefits of using a SAN instead of
internal disks. SAN storage that is combined with an IASP enables cluster configurations
where the replication technologies that are available on the SAN storage can provide HADR
options. Many clients across many different sectors, such as manufacturing, insurance,
aviation, banking, and retail, run with this configuration.
When considering IASP implementation, consider business needs first and make a plan to
implement them in the client environment. At the application level, you should have a good
understanding of where objects are, who the users are, and how the programs and data are
accessed. Certain types of objects are supported in an IASP but should remain in the system
ASP to maintain the expected behavior of the system. Some work management-related
changes must be made when you introduce an IASP. In general, there are two environments
in which an IASP can be used:
Single system environment
In this case, you have an independent disk pool on the local system. It can be brought
online or offline without impacting other storage pools on the system or doing an IPL. This
environment is often used in a local system that contains multiple databases on IASPs.
The IASP can be made available while the system is active without performing an IPL, and
the independent disk pool may remain offline until it is needed. This setup is common if
you want to separate application data; keep historical and archived data on the same
system; maintain multiple application versions; or meet data compliance rules where you
must have data in different pools and keep it offline unless it is needed by the business.
Multi-system environment
In this case, you have one or more IASPs that are shared between multiple IBM i partitions
(on the same system or on different systems, possibly even in other locations) that are
members of the cluster. In this setup, the IASP can be switched between these systems
without an IPL for any of the partitions. This environment is an advantage because it
allows continuous availability of the data. There can be various reasons for implementing
IASPs in multi-system environments. For example, if you are implementing a new HADR
solution, then you normally choose a switchable IASP setup for the most flexible
implementation.
The registration facility provides a central point to store and retrieve information about IBM i
and other exit points and their associated exit programs. This information is stored in the
registration facility repository, and it can be retrieved to determine which exit points and exit
programs exist.
The exit point provider is responsible for defining the exit point information; defining the
format in which the exit program receives data; and calling the exit program. There are four
areas where exit points provide another layer of security.
IBM provides socket exit points that make it possible to develop exit programs for securing
connections to your IBM i by specific ports and IP addresses.
This support is not a replacement for resource security. Function usage does not prevent a
user from accessing a resource (such as a file or program) from another interface. Function
usage support provides APIs to perform the following tasks:
Register a function.
Retrieve information about the function.
Define who can or cannot use the function.
Check to see whether the user may use the function.
To use a function within an application, the application provider must register the functions
when the application is installed. The registered function corresponds to a code block for
specific functions in the application. When the user runs the application, before the
application invokes the code block, it calls the check usage API to verify that the user has the
authority to use the function that is associated with the code block. If the user may use the
registered function, the code block runs. If the user may not use the function, the user is
prevented from running the code block.
The system administrator specifies who may access a function. The administrator can either
use the Work with Function Usage Information (WRKFCNUSG) command to manage the access
to program function, or select Security → Function Usage in IBM Navigator for i.
Separation of duties
Separation of duties helps businesses comply with government regulations and simplifies the
management of authorities. It can divide administrative functions across individuals without
overlapping responsibilities so that one user does not possess unlimited authority, such as
with the *ALLOBJ authority. The QIBM_DB_SECADM function can grant authority, revoke authority,
change ownership, or change the primary group, but without providing access to the object or,
in the case of a database table, to the data that is in the table or allowing other operations on
the table.
QIBM_DB_SECADM function usage can be granted only by a user with the *SECADM special
authority, and it can be granted to a user or a group.
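For example, the following sketch grants the function to a hypothetical profile and a hypothetical group profile:
CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(DBSECOFR) USAGE(*ALLOWED)
CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(DBSECGRP) USAGE(*ALLOWED)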
You can use QIBM_DB_SECADM to administer Row and Column Access Control (RCAC). RCAC
can restrict which rows a user may access in a table and whether a user may see information
in certain columns of a table. For more information, see Row and column access control
(RCAC).
Note: For more information about IBM i security, see IBM i 7.5 Security Reference, IBM i
7.5 Security - Planning and setting up system security, and Security.
A major concern is that the root directory “/” is publicly accessible because the default setting
enables full access for public users. After you install IBM i, the default permission for root is
set to *RWX, which is a considerable risk that should be mitigated. The IFS enables users to
store and manage various types of data, such as documents, images, program source code,
and more. It offers a unified interface for accessing and managing files across different
platforms, which simplifies the integration of IBM i systems with other systems in a diverse IT
environment. Often, the data that is stored in IFS is sensitive and requires robust security
measures.
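A commonly recommended mitigation, shown here as an illustrative sketch (verify the impact on your applications first), removes write authority for *PUBLIC on the root directory:
CHGAUT OBJ('/') USER(*PUBLIC) DTAAUT(*RX) OBJAUT(*NONE)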
Figure 5-7 A structure for all information that is stored in the IBM i
Note: Only objects in file systems that are fully converted to *TYPE2 directories are
scanned.
Figure 5-8 shows setting a file share directory (/ptf) that is secured by limiting access to
members of the authorization list PRODACC.
Figure 5-9 on page 159 shows the interface to display the current access permissions for
directory /ptf. Additional access can be set from this window.
Figure 5-10 shows the definition of a shared directory (FIXES) that points to the /ptf directory
and defines the access as limited to members in the PRODACC group.
Important: Authorization lists do not restrict access to users with the *ALLOBJ special
authority. Any user profile with the *ALLOBJ special authority may access IBM i NetServer
as though there is no authorization list restriction in place. You can use this special
authority to create administrative shares that can be accessed only by IBM i administrative
profiles by specifying an authorization list that lists only public *EXCLUDE.
For more information about IBM i NetServer security, see IBM i NetServer security.
For more information about the IBM i 7.5 IFS, see IBM i 7.5: Files and file systems Integrated
File System.
The IBM Technology Expert Labs team for IBM i Security is an IBM team that specializes in
IBM i security services, such as security assessments, system hardening, and developing
IBM i utilities. This family of utilities is known as Security Compliance Tools for IBM i.
For more information about these offerings, see “Security assessment for IBM Power from
IBM Technology Expert Labs” on page 262.
This chapter provides an overview of Linux on Power by highlighting its unique features and
challenges. It describes various supported Linux environments and offers guidance about
implementing robust security measures to establish a secure and high-performing Linux
system. By combining the strengths of Linux and IBM Power technology, organizations can
benefit from a powerful and flexible infrastructure.
Linux is an open-source system. In contrast to AIX or IBM i, which each had fewer than ten
reported vulnerabilities in 2023, the Linux kernel had more than a hundred documented flaws
during the same period. Given its open nature and extensive user base, this outcome was
predictable, and it makes the task of protecting Linux workloads even more critical.
Regarding security, the chapter describes practices, processes, and tools that safeguard
Linux on Power from cyberthreats to help ensure the confidentiality, integrity, and availability
of these systems.
The intricate nature of Linux systems demands a diverse set of tools and methodologies to
effectively reduce the attack surface and bolster defenses against both established and
emerging threats.
Note: In our laboratory setting, we use various distributions, including Red Hat, SUSE,
Ubuntu, Debian, CentOS, Fedora, Alma, Rocky, and openSUSE, which offer robust
support for the ppc64le architecture.
To enhance system security, Linux offers a wide range of potent tools, many of which are
freely available, open source, and compatible with the ppc64le architecture. By implementing
these tools and adhering to best practices, you can bolster the overall security posture of Power
servers, starting from a basic logical partition (LPAR) that is equipped with Linux and suitable
software configurations.
6.2.1 Malware
Malware, including viruses, worms, trojans, and ransomware, poses significant risks to Linux
on Power servers. These malicious programs can disrupt operations, steal sensitive
information, and cause substantial financial and reputational damage.
IBM was one of the earliest champions of open source. IBM backed influential communities
like Linux, Apache, and Eclipse, and pushed for open licenses, open governance, and open
standards. Beginning in the late 1990s, IBM supported Linux with patent pledges, a $1 billion
investment of technical and other resources, and helped to establish the Linux Foundation in
2000. Since then, IBM has consistently supported open-source initiatives in general, and
Linux and accompanying technologies in particular. Proof of this is IBM’s support of Linux on
IBM hardware, including IBM Power.
IBM Power supports many Linux distributions, each offering unique features and capabilities.
This section provides an overview of the supported distributions and their main security
features and utilities.
For more information about RHEL, see this Red Hat website.
Debian-based distributions
Debian is a widely-used operating system that is primarily known for its stability, reliability,
security, and extensive software repositories. It is a Linux distribution that consists entirely of
no-charge software. Debian is the foundation for many other distributions, most notably
Ubuntu, which is also supported on Power.
Ubuntu is optimized for workloads in the mobile, social, cloud, big data, analytics, and
machine learning spaces. With its unique deployment tools (including Juju and MAAS),
Ubuntu makes the management of those workloads simpler. Starting with Ubuntu 22.04 LTS,
IBM Power9 and IBM Power10 processors are supported. For more information about the
Ubuntu Server, see Scale out with Ubuntu Server.
SUSE Linux Enterprise Server for IBM POWER is an enterprise-grade Linux distribution that
is optimized for IBM Power servers. It delivers increased reliability and provides a
high-performance platform to meet increasing business demands and accelerate innovation
while improving deployment times.
For more information, see SUSE Linux Enterprise Server for IBM Power.
Supported distributions
Table 6-1 is a table of Linux distributions that are supported by IBM on IBM Power10 servers.
Also listed are the Ubuntu distributions where the support comes directly from Canonical.
9043-MRX (IBM Power E1050), 9105-22A (IBM Power S1022), 9105-22B (IBM Power S1022s),
9105-41B (IBM Power S1014), 9105-42A (IBM Power S1024), 9786-22H (IBM Power L1022),
and 9786-42H (IBM Power L1024):
Red Hat Enterprise Linux 9.0, any subsequent RHEL 9.x releases
Red Hat Enterprise Linux 8.4, any subsequent RHEL 8.x releases
SUSE Linux Enterprise Server 15 SP3, any subsequent SUSE Linux Enterprise Server 15 updates
Red Hat OpenShift Container Platform 4.9 or later
Ubuntu 22.04 or later (a)
9080-HEX (IBM Power E1080):
Red Hat Enterprise Linux 9.0, any subsequent RHEL 9.x releases
Red Hat Enterprise Linux 8.4, any subsequent RHEL 8.x releases
Red Hat Enterprise Linux 8.2 (Power9 Compatibility mode only) (b)
SUSE Linux Enterprise Server 15 SP3, any subsequent SUSE Linux Enterprise Server 15 updates
SUSE Linux Enterprise Server 12 SP5 (Power9 Compatibility mode only)
Red Hat OpenShift Container Platform 4.9 or later
Ubuntu 22.04 or later (a)
9028-21B (IBM Power S1012):
Red Hat Enterprise Linux 9.2 for PowerLE, or later
Red Hat OpenShift Container Platform 4.15 or later
Ubuntu 22.04 or later (a)
a. Ubuntu on Power support is available directly from Canonical.
b. Red Hat Business Unit approval is required for using RHEL 8.2 on IBM Power10
processor-based systems.
IBM Power10 processor-based systems support the following configurations per LPAR:
SUSE Linux Enterprise Server 15 SP4: Up to 64 TB of memory and 240 processor cores
SUSE Linux Enterprise Server 15 SP3: Up to 32 TB of memory and 240 processor cores
Red Hat Enterprise Linux 8.6, or later: Up to 64 TB of memory and 240 processor cores
Red Hat Enterprise Linux 8.4 and 9.0: Up to 32 TB of memory and 240 processor cores
SUSE Linux Enterprise Server 12 SP5 and RHEL 8.2: Up to 8 TB of memory and 120
processor cores
The recommended Linux distribution for a particular server is always the latest level
distribution that is optimized for the server. The listed distributions are the operating system
versions that are supported for the specific hardware. For more information about the product
lifecycles for Linux distributions, see the support site for each distribution.
SUSE Linux Enterprise Server: SUSE Product Support Lifecycle
Red Hat Enterprise Linux: Red Hat Enterprise Linux Life Cycle
Ubuntu: Ubuntu Release Life Cycle
For libraries and tools that can help leverage the capabilities of Linux on Power,
see IBM Software Development Kit for Linux on Power tools. Other information about
packages and migration assistance can be found in Find packages built for POWER in the
Linux on Power developer portal.
Red Hat OpenShift Container Platform is also supported on these servers. For more
information about Red Hat OpenShift Container Platform, see Getting started with Red Hat
OpenShift on IBM Cloud and Architecture and dependencies of the service.
Given the complexity of Linux systems, various tools and methodologies are necessary to
effectively minimize the attack surface and strengthen defenses against both established and
emerging threats.
The security measures and available tools that you use depend on the chosen distribution
and version. This section outlines general principles without focusing on specific
configurations, which might change over time.
This section covers essential aspects of hardening a GNU/Linux OS on IBM Power from a
distribution-neutral perspective. We provide practical examples and guidelines by using
open-source software that is tested on ppc64le, specifically in Debian and Fedora to ensure
that our Linux on Power servers are as secure as possible through an open-source first
approach.
6.4.1 Compliance
Compliance helps ensure that Linux deployments meet the minimum required standards in
terms of configuration, patching, security, and regulatory compliance.
CIS Benchmarks are developed by the CIS and offer best practices for securing a wide range
of systems and applications, including various Linux distributions. They are community-driven
and cover a broad spectrum of security configurations.
DISA STIGs are developed by the DISA and tailored to the stringent security requirements of
the US Department of Defense (DoD). These guides provide highly detailed security
configurations and are mandatory for DoD-related systems. DISA STIGs offer comprehensive
security measures that address potential threats that are specific to defense environments.
Implementing these guidelines helps ensure that systems meet federal security standards
and are protected against sophisticated threats.
For our purpose of providing a good basis for Linux security in Power, we use CIS as a
reference, but other standards such as Payment Card Industry Data Security Standard (PCI
DSS) might be more appropriate depending on the environment.
OpenSCAP, often referred to by its CLI tool oscap, is an open-source framework that
implements SCAP standards. It provides tools to audit and verify system configurations,
vulnerabilities, and compliance against the content that is provided by SCAP, including the
content from the SSG.
3. Automatically address these compliance gaps when technically feasible by generating
Bash scripts and Ansible playbooks from the CLI:
oscap xccdf generate fix --profile [PROFILE_ID] --output remediation_script.sh \
/usr/share/xml/scap/ssg/content/ssg-[OS].xml
Automated remediation might yield unexpected results on systems that have already been
modified. Therefore, thoroughly evaluate the potential impact of remediation actions on your
specific systems. You might want to make a snapshot or backup before continuing.
Tip: Under normal conditions, the remediation of compliance issues is the result of several
iterations and some backtracking by recovering snapshots or backups until you reach a
level of security that is adequate for your purposes. Compliance should be balanced
against the usability of the system.
OpenSCAP can also help you check whether there are any vulnerabilities in your current OS
version by running Open Vulnerability and Assessment Language (OVAL) and generating a
report. Example 6-1 shows how you can generate the report by using Debian.
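The following sketch outlines such an OVAL scan on Debian. The download URL pattern and the release name (bookworm) are assumptions that you should verify against the current Debian security pages:
wget https://www.debian.org/security/oval/oval-definitions-bookworm.xml
oscap oval eval --results oval-results.xml --report oval-report.html \
    oval-definitions-bookworm.xml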
Tip: If you are satisfied with the image that you evaluated and you use IBM PowerVC (the
IBM Power virtualization solution that is based on OpenStack), it is a best practice to
capture this system as a template or create an OVA. Be careful when using “default”
installations because they might be missing important security protection settings. Always
use the appropriate compliance policies to help ensure that your Linux systems running on
IBM Power are all configured and protected.
Firewall technologies
Firewalls are a critical component of network security. They are essential for controlling the
flow of incoming and outgoing traffic based on predefined security rules. Effective firewall
management on Linux systems involves various tools, each offering different levels of control,
efficiency, and simplicity of use. This section explores the primary tools that are used in Linux
firewall implementations, their relationships, and practical guidance on their use.
Linux firewalls have evolved over time. They started with simple packet filtering mechanisms,
and now use more sophisticated management tools. The primary tools that are used in Linux
firewall implementations include iptables, nftables, firewalld, and Uncomplicated Firewall
(UFW). Understanding the background and functions of these tools helps you choose the
correct one for your specific needs (including the distribution that you chose).
Firewall tools
Within the front-end tools for creating and maintaining firewall rules in Linux (by using
iptables or nftables), you have two main options:
firewalld is a dynamic firewall management tool that has been part of RHEL since
Version 7. It simplifies firewall management by using the concept of network zones, which
define the trust level of network connections and interfaces. firewalld enables real-time
changes without restarting the firewall, which provides a flexible and dynamic approach
compared to traditional static tools like iptables. firewalld uses nftables as its back end
by default on modern systems, and firewall-cmd as the CLI tool.
For example, the following command permanently opens TCP port 22 (SSH) in the public zone:
sudo firewall-cmd --zone=public --add-port=22/tcp --permanent
UFW is designed to provide an interface for managing firewall settings. With its
straightforward command-line interface (CLI), UFW enables users to implement basic
firewall rules with minimal effort. It is useful for users who might not have extensive
networking or firewall management experience but still must ensure system security. UFW
is included by default in Ubuntu and can be installed on Debian systems.
Here is a command to allow SSH traffic through UFW:
sudo ufw allow 22/tcp
Regular reviews and updates of firewall rules are a best practice to maintain compliance with
security policies and adapt to emerging threats. These measures collectively aim to fortify
Linux systems against various network-based threats.
A best practice is to forward Linux firewall logs to a solution that automates their analysis,
alerts, and responses. For an example that uses IBM QRadar, see this document.
Example 6-2 shows a simple firewall configuration on Linux on Power by using firewall-cmd.
# Enable UFW
sudo ufw enable
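For comparison with the UFW lines above, here is a minimal firewalld sequence along the same lines (an illustrative sketch; the ssh and https services are assumptions). It permanently allows those services in the public zone, then reloads and verifies the rules:
sudo firewall-cmd --zone=public --add-service=ssh --permanent
sudo firewall-cmd --zone=public --add-service=https --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-all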
Tip: Be careful not to lock yourself out. Although the new rules do not apply to existing
connections, make sure that you either have a script to disable the firewall automatically
after a few seconds or direct console access in an emergency.
Also, CIS emphasizes the importance of logging and auditing firewall activity to detect and
respond to suspicious behavior, and suggests using stateful inspection and rate limiting to
prevent attacks like denial of service (DoS).
This use case uses Suricata because it has strong support for the ppc64le architecture.
Suricata is a versatile and high-performance Network Security Monitoring (NSM) tool that can
detect and block network attacks. By default, Suricata operates as a passive IDS by scanning
for suspicious traffic on a server or network and generating logs and alerts for further
analysis. Also, you can configure it as an active IPS to log, alert, and block network traffic that
matches specific rules. Suricata is open source and managed by the Open Information
Security Foundation.
The suricata.yaml configuration file guides the setup with inline comments, for example:
##
## Step 3: Configure common capture settings
##
## See "Advanced Capture Options" below for more options, including Netmap
## and PF_RING.
##
Suricata includes a tool that is called suricata-update that can fetch rule sets from external
providers. Example 6-5 shows how to download the latest rule set for your Suricata server.
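As a sketch of the typical workflow (by default, suricata-update fetches the Emerging Threats Open rule set):
sudo suricata-update                # download and merge the default rule set
sudo suricata-update list-sources   # list other available rule providers
sudo systemctl restart suricata     # reload Suricata with the new rules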
Encryption in transit
Encrypting data in transit protects it from being intercepted and read by unauthorized parties.
Protocols such as Secure Sockets Layer (SSL)/Transport Layer Security (TLS) are used to
secure communications over networks.
SSL/TLS are secure protocols for encrypting web traffic, email, and other
communications.
To secure your web server with SSL/TLS, obtain a digital certificate. Certbot is an
automated tool that is designed to streamline the process of acquiring and installing
SSL/TLS certificates. It is one of many technology projects that are developed by the
Electronic Frontier Foundation (EFF) to promote online freedom.
Certbot is available in different Linux repositories, including the ppc64le versions, which
makes installation straightforward. It has plug-ins for both Apache and nginx, among other
deployments, and includes a tool to automatically renew these certificates.
Example 6-6 shows how to install Certbot in a Debian Linux system. Other Linux versions
might differ slightly.
To obtain and automatically install the certificate for your web server, run either sudo
certbot --apache or sudo certbot --nginx, depending on which tool you use.
Certbot prompts you to enter your email address and agree to the terms of service. It
interacts with your web server to perform the domain verification process and install the
SSL/TLS certificate.
For more information and how-to guides, see the Certbot website.
Virtual private networks (VPNs) create encrypted tunnels to provide secure remote
access. Setting up a VPN is essential for protecting communications and helping ensure
the privacy of transmitted data. VPNs are highly recommended by the CIS as a best
practice for securing remote connections.
To install and configure OpenVPN on Linux on Power, follow the specific guides for
different Linux on Power distributions:
– Debian-based
– RHEL-based
– SUSE-based
Encryption at rest
Encryption at rest is a form of encryption that is designed to prevent an attacker from
accessing data by ensuring that the data is encrypted when stored on a persistent device.
You can do this task at different layers, from physical storage systems to the OS. If you
choose to encrypt at the OS level, employ full disk encryption by using Linux Unified Key
Setup (LUKS) with LVM (Debian / RHEL-Based) or BTRFS (SUSE).
LUKS offers a suite of tools that are designed to simplify the management of encrypted
devices. LUKS enables encryption of block devices and supports multiple user keys that can
decrypt a master key. This master key is used for the bulk encryption of the partition.
Example 6-7 provides an example of activating encryption at rest for a logical volume (LV) by
using lvm.
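A minimal sketch of the sequence, assuming a hypothetical volume group vg0 with an LV named lvdata (all data on the LV is destroyed):
sudo cryptsetup luksFormat /dev/vg0/lvdata        # initialize LUKS; prompts for a passphrase
sudo cryptsetup open /dev/vg0/lvdata cryptdata    # map the decrypted device
sudo mkfs.ext4 /dev/mapper/cryptdata              # create a file system on the mapped device
sudo mount /dev/mapper/cryptdata /mnt/secure      # mount it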
The luksFormat step warns you before destroying existing data:
WARNING!
========
This will overwrite data on /dev/sdX irrevocably.
Linux uses Pluggable Authentication Modules (PAMs) in the authentication process, serving
as an intermediary layer between users and applications. PAM modules are accessible on a
system-wide basis, allowing any application to request their services. The PAM modules
implement most of the user security measures that are defined in various files within the /etc
directory, including Lightweight Directory Access Protocol (LDAP), Kerberos and Active
Directory (AD) connections, or MFA options.
Access control mechanisms help ensure that only authorized users can access specific
resources, which include configuring SUDO, managing user groups, and maintaining access
logs.
Password policies
Enforcing strong password policies is crucial to prevent unauthorized access. Policies should
mandate complex passwords, regular password changes, and account lockout mechanisms
after multiple failed login attempts.
A password policy can specify the minimum length that a password must have and the
maximum duration that it can be used before it must be changed. All users under this policy
must create passwords that are long enough and update them regularly. Implementing
password policies helps mitigate the risk of passwords being discovered and misused.
A minimum password policy for Linux on Power should contain at least the following
characteristics:
Complex passwords: Require a mix of uppercase, lowercase, numbers, and special
characters. This policy can be enforced with the pam_pwquality PAM module. CIS
recommends that passwords should be at least 14 characters long with no limit on the
enforced maximum number of characters, among other requirements.
# The maximum credit for having digits in the new password. If less than 0,
# it is the minimum number of digits in the new password.
dcredit = -1
# The maximum credit for having uppercase characters in the new password.
# If less than 0, it is the minimum number of uppercase characters in the new
# password.
ucredit = -1
..
#
# If defined, all su activity is logged to this file.
#
SULOG_FILE /var/log/sulog
Account lockout: Typically, it is recommended by CIS to lock the account after five
unsuccessful attempts and to unlock it automatically after a specified period, such as
15 minutes. These settings help mitigate the risk of unauthorized access by deterring
repeated login attempts and ensuring that legitimate users can regain access after a brief
lockout period.
This policy is configured in the /etc/pam.d/common-auth file (Debian, Ubuntu, and SUSE)
or /etc/pam.d/system-auth file (RHEL). The pam_faillock module performs a function
similar to the legacy pam_tally and pam_tally2 but with more options and flexibility. Check
which is the recommended method in your chosen distribution and version.
Here is an excerpt of a sample legacy configuration (SUSE):
auth required pam_tally2.so onerr=fail deny=3 unlock_time=1800
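A comparable pam_faillock configuration that follows the CIS-style thresholds above (five attempts, 15-minute unlock) might look like this sketch; exact placement and defaults vary by distribution:
auth     required      pam_faillock.so preauth silent deny=5 unlock_time=900
auth     [default=die] pam_faillock.so authfail deny=5 unlock_time=900
account  required      pam_faillock.so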
Groups
Grouping users based on their roles and responsibilities helps manage permissions.
Assigning users to appropriate groups helps ensure that they have access only to the
necessary resources. For example, a file with permissions rw-rw---- (660) allows the owner
and the group to read/write the file, but others cannot access it. These permissions reduce
the risk of accidental or malicious modifications to sensitive files.
This way, developers can be part of a dev group with access to development files and the
production team is part of a prod group with access to production files.
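For example, the following sketch (with hypothetical group, user, and file names) creates the groups, assigns a user, and applies 660 permissions to a development file:
sudo groupadd dev
sudo groupadd prod
sudo usermod -aG dev alice             # alice joins the dev group
sudo chgrp dev /srv/dev/build.conf     # group ownership to dev
sudo chmod 660 /srv/dev/build.conf     # owner and group read/write; others have no access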
CIS advises regular audits of group memberships to help ensure that users have the
appropriate permissions and to remove any unnecessary or outdated group assignments.
Also, the creation of custom groups for specific tasks or roles is recommended to further
refine access control and minimize potential security risks.
Example 6-11 shows the process to grant read/write permissions to another user (in this
case, john). After setting the ACL for the user, use the getfacl command to display the ACL
as shown in the example.
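The commands resemble the following sketch (the file name is hypothetical):
setfacl -m u:john:rw /srv/data/report.txt   # grant john read/write through an ACL entry
getfacl /srv/data/report.txt                # display the resulting ACL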
Multi-factor authentication
MFA is a security process that requires users to verify their identity through multiple methods
before gaining access to a system or application. Unlike single-factor authentication (for
example, a password), MFA enhances security by combining two or more independent
credentials from the following categories:
Something that you know: Typically, a password or personal identification number (PIN)
Something that you have: A physical device, such as a smartphone
Something that you are: Biometric verification like fingerprints, facial recognition, or iris
scans
When attempting to log in to a system that is secured by MFA, users must supply extra
credentials beyond their standard username and password. In the context of Linux systems,
SSH serves as a common method for remotely accessing the system. To enhance security
further, incorporate MFA when you use SSH.
One method of implementing MFA is using IBM PowerSC. However, MFA can also be
implemented by using native tools like Google Authenticator, which is provided by the
following packages:
libpam-google-authenticator for Debian-based systems
google-authenticator-libpam for SUSE-based systems
google-authenticator in Extra Packages for Enterprise Linux (EPEL) for RHEL-based
systems
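A sketch of the main steps on a Debian-based system follows; file paths and option names vary across distributions and OpenSSH versions, so verify them for your environment:
sudo apt-get install libpam-google-authenticator
google-authenticator    # run as the target user to generate a TOTP secret
# In /etc/pam.d/sshd, add:
#   auth required pam_google_authenticator.so
# In /etc/ssh/sshd_config, set (KbdInteractiveAuthentication on newer OpenSSH):
#   ChallengeResponseAuthentication yes
sudo systemctl restart sshd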
For more information about adding MFA to other distributions, see the following resources:
Configure SSH to use two-factor authentication
Set up two-factor authentication for SSH on Fedora
RBAC offers a more detailed approach to identity and access management (IAM) compared
to ACLs, yet it is simpler and more straightforward to implement than attribute-based access
control (ABAC). Other IAM methods, such as mandatory access control (MAC) or
discretionary access control (DAC), can be effective for particular scenarios.
The following list provides some methods to help with setting the appropriate access controls
within your system:
SELinux (RHEL/SUSE-based) defines roles and the assignment of domains (or types) to
these roles. Users are assigned roles, and the roles define the allowable operations on
objects within the system, which makes it an RBAC-like solution. SELinux uses security
policies that are label-based, identifying applications through their file system labels.
SELinux might be complex to configure and manage. For more information, see the SELinux
project on GitHub.
AppArmor (Debian-based) employs security profiles that are path-based, identifying
applications by their executable paths. AppArmor does not have a traditional RBAC
approach, but can define profiles for applications, which can be seen as a form of access
control. For more information, see AppArmor.
Each solution offers different features and complexities so that administrators can choose the
most appropriate tool based on their specific security requirements and environment. Red
Hat-based distributions come preconfigured with several SELinux policies, but the
configuration might be more complex than FreeIPA or AppArmor. RHEL roles are typically
part of automation policies.
To implement group access control by using sudo, complete the following steps:
1. Determine the different roles in the organization and the specific permissions or
commands that each role needs.
2. Create UNIX groups corresponding to each role. For example, admin, developer, auditor,
and others. Example 6-12 shows adding groups.
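The commands resemble this minimal sketch, with hypothetical group and user names:
sudo groupadd admin
sudo groupadd developer
sudo groupadd auditor
sudo usermod -aG developer alice    # assign a user to a role group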
4. Edit the sudoers file to grant permissions to groups. To do this task, use the visudo
command to ensure proper syntax and prevent mistakes:
sudo visudo
In the sudoers file, define the commands that each group can run. Example 6-14 shows
the group permissions.
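The entries might resemble the following sketch (the group names and command paths are illustrative):
%admin      ALL=(ALL:ALL) ALL
%developer  ALL=(ALL) /usr/bin/git, /usr/bin/make, /usr/bin/gcc
%auditor    ALL=(ALL) NOPASSWD: /usr/bin/journalctl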
You can now use the sudo command to run commands based on user roles, as shown in
Example 6-15.
In this example, the admin role has full control over the system; the developer role grants
access to development tools like git, make, and gcc; and the auditor role has read-only
access to logs and configuration files.
You can also use ausearch with aureport for detailed reports, as shown in Example 6-19. The
command is as follows:
sudo aureport -k
AIDE helps to monitor and verify the integrity of files and directories on a system. It helps
detect unauthorized changes, such as modifications, deletions, or additions, by creating a
database of file attributes and comparing the current state to the baseline. It is a File Integrity
Monitor that initially was developed as a no-cost and open-source replacement for Tripwire,
and it is licensed under the terms of the GNU General Public License.
AIDE also runs daily through the /etc/cron.daily/aide crontab, and the output is emailed to
the user that is specified in the MAILTO= directive of the /etc/default/aide configuration file.
Set up a cron job for regular checks by running sudo crontab -e and adding the following
line:
0 0 * * * /usr/bin/aide --check
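Before the first check, initialize the baseline database. A minimal sketch follows; database paths vary by distribution, and Debian provides an aideinit wrapper:
sudo aide --init                                          # build the baseline database
sudo mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db   # activate the baseline
sudo aide --check                                         # compare the system to the baseline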
AIDE can determine what changes were made to a system, but it cannot determine who
made the change, when the change occurred, and what command was used to make the
change. To discover this information, use auditd and ausearch.
By combining these tools, you establish a robust system for logging, integrity checking, and
auditing. This multi-layered approach enhances the security and integrity of your Linux
installation on the ppc64le architecture, providing early detection of potential security
incidents and unauthorized changes.
Tip: Forwarding these events to a Security Information and Event Management (SIEM) or
remote log solution (including PowerSC Trusted Logging on Virtual I/O Server (VIOS)) is a
best practice to help ensure that these logs are tamper-proof and cannot be modified or
deleted. This practice applies to any other log or audit file of security interest.
Integrating Linux on Power with SIEM tools such as IBM QRadar requires several steps to
help ensure that logs from the Linux systems are collected, transmitted, and ingested by the
SIEM platform. The same steps apply if you use a remote log collector or other observability
tool to centralize the logs from different environments for their secure storage and analysis.
syslog-based approach
syslog is a protocol that was created in the 1980s. It remains the default on OpenBSD. There
are two options that are available:
syslog-ng
syslog-ng was developed in the late 1990s as a robust replacement for syslog. It
introduced support for TCP, encryption, and numerous other features. syslog-ng became
the standard and was included in distributions such as SUSE, Debian, and Fedora for
many years.
Example 6-20 shows a configuration example to forward auth logs to a remote SIEM on IP
1.2.3.4 (Debian-based) by using syslog-ng.
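The configuration resembles the following sketch (an illustrative example; the source name s_src is the Debian default, and the SIEM address and port are placeholders):
destination d_siem { network("1.2.3.4" transport("tcp") port(514)); };
filter f_auth { facility(auth, authpriv); };
log { source(s_src); filter(f_auth); destination(d_siem); };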
Another approach is to send JSON or field-based logs to IBM QRadar without using
traditional syslog daemons or after storing messages in a database. You can accomplish this
task by using a tool like Fluentd or even your own scripts in Python.
Fluentd is an extensively deployed open-source log collector that is written in Ruby. It stands
out for its versatile pluggable architecture. This design enables it to seamlessly connect to a
broad array of log sources and storage solutions, including Elasticsearch, Loki, rsyslog,
MongoDB, Amazon Web Services (AWS) S3 object storage, and Apache Kafka, among
others.
Figure 6-7 shows how Fluentd can help with log management.
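A minimal Fluentd configuration for this approach might resemble the following sketch, which tails the authentication log and forwards parsed events. The host, port, and file paths are placeholders, and the receiving side must accept Fluentd's forward protocol or a suitable output plug-in must be used instead:
<source>
  @type tail
  path /var/log/auth.log
  pos_file /var/lib/fluentd/auth.log.pos
  tag linux.auth
  <parse>
    @type syslog
  </parse>
</source>
<match linux.**>
  @type forward
  <server>
    host 1.2.3.4
    port 24224
  </server>
</match>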
To prevent these threats from infecting Linux systems, you can use ClamAV to detect and
remove various forms of malware through regular scans and real-time protection, and
chkrootkit to identify and report signs of rootkits. Both tools enhance security by helping to
ensure that the system remains free from unauthorized access and malicious activity.
Both tools are open source and available for the ppc64le architecture.
Virus detection
There are a couple of options for virus detection on IBM Power.
ClamAV
ClamAV is a versatile and powerful open-source antivirus engine that is designed to detect
trojans, viruses, malware, and other malicious threats. It offers several features that make it a
valuable tool for enhancing Linux system security:
Regular scans: You can configure ClamAV to perform regular scans of the system, which
helps ensure that any new or existing malware is detected and addressed.
Real-time protection: With the ClamAV daemon, you can enable real-time scanning to
monitor file activity continuously, providing immediate detection and response to potential
threats.
Automatic updates: ClamAV includes an automatic update mechanism for its virus
definitions, which helps ensure that the system is protected against the latest threats.
Cross-platform support: ClamAV supports multiple platforms, making it a flexible solution
for various environments working on Linux on Power but also in AIX and IBM i
(IBM Portable Application Solutions Environment for i (PASE for i)).
To install and configure ClamAV on a Debian-based Linux system, complete the following
steps:
1. Install ClamAV by running the following command:
sudo apt-get install clamav clamav-daemon
2. Update the ClamAV database by running the following command:
sudo freshclam
3. Start the ClamAV daemon by running the following command:
sudo systemctl start clamav-daemon
4. Schedule a daily scan and send a report by email by adding the following lines to your
crontab (replace the address with your own):
MAILTO=admin@example.com
0 1 * * * /usr/bin/clamscan -ri --no-summary /
Powertech Antivirus
Offered by Fortra (previously Help Systems), Powertech Antivirus is a commercially available
antivirus solution providing native scanning for IBM systems, including IBM i, AIX, and Linux
on Power (it also supports Linux on IBM Z, LinuxONE, and Linux on x86).
Powertech Antivirus offers both on-demand and scheduled scanning, enabling you to balance
security and system performance. Compatible with IBM, Fortra, and third-party scheduling
solutions, you can customize the scan frequency and target directories. Powertech Antivirus
can be run independently on each endpoint or it can be centrally managed.
Rootkit detection
The tool chkrootkit is a rootkit detector that checks for signs of rootkits on UNIX-based
systems. It scans for common signatures of known rootkits and helps ensure that the system
remains uncompromised.
You can scan for many types of rootkits and detect certain log deletions by using chkrootkit.
Although it does not remove any infected files, it does specifically tell you which ones are
infected so that you can remove, reinstall, or repair the file or package.
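A typical installation and scan on a Debian-based system might look like the following sketch:

sudo apt-get install chkrootkit
sudo chkrootkit -q    # quiet mode: print only suspicious findings

Run the scan regularly (for example, from cron) and review any output, because findings can
include false positives that require manual verification.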
IBM Storage Protect is a comprehensive, enterprise-grade data protection solution that can
safeguard critical data across diverse environments. Designed to automate the processes of
data backup, recovery, and archiving, IBM Storage Protect helps ensure that business-critical
information remains secure, available, and verifiable. Its robust architecture and extensive
feature set make it an ideal choice for organizations of all sizes, from small businesses to
large enterprises. Both the clients and the server itself are supported on Linux on the ppc64le
architecture. For more information, see the IBM Storage Protect documentation.
Using Foreman and Katello together with Red Hat Satellite provides one option for update
management.
Foreman is an open-source lifecycle management tool that system administrators can use to
manage servers throughout their lifecycle, from provisioning and configuration to monitoring
and management. It provides an integrated, comprehensive solution for managing large-scale
infrastructures.
Katello is a plug-in for Foreman that adds content management and subscription
management capabilities. Administrators can use it to manage software repositories, handle
updates, and ensure compliance with subscription policies.
Using Red Hat Satellite along with Foreman and Katello manages package and patch
lifecycles, including update distribution, and can initiate updates in this environment.
However, Ansible offers a more comprehensive automation solution for keeping systems up
to date (see the playbook sketch after this list):
Performs prechecks, backups, and snapshots.
Initiates patch updates.
Restarts systems.
Conducts post-checks for complete patch automation.
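A minimal sketch of such a patch cycle, assuming RHEL-family hosts in a hypothetical
inventory group named linux_on_power, might look like the following playbook:

# Illustrative patch-cycle playbook; the host group and paths are placeholders.
- hosts: linux_on_power
  become: true
  tasks:
    - name: Precheck - record the installed package list
      ansible.builtin.shell: rpm -qa | sort > /root/packages-before.txt

    - name: Apply all available updates
      ansible.builtin.dnf:
        name: "*"
        state: latest

    - name: Restart the system
      ansible.builtin.reboot:

    - name: Post-check - confirm the running kernel
      ansible.builtin.command: uname -r
      register: kernel_after

    - name: Report the running kernel
      ansible.builtin.debug:
        var: kernel_after.stdout

Snapshots of virtual machines or LPARs can be added before the update step by using the
modules for your virtualization or storage platform.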
6.4.10 Monitoring
Monitoring Linux on Power servers is vital to help ensure their security, supplementing the
specialized tools that are described in this chapter and providing more insights.
There are many options for monitoring Linux on Power servers. Most commercial and
community solutions have ppc64le agents. For example, consider Pandora FMS. There are
also solutions that support monitoring Linux on Power and also fit into a complete monitoring
infrastructure across all your IBM Power workloads running on Linux, AIX, and IBM i, where
you can visualize the status of any partition and generate alerts that can be redirected to a
centralized monitoring environment.
One of the simplest options for this multiple architecture monitoring tool is nmon. nmon was
originally written for AIX, and is now an integrated tool within AIX. A version for Linux was
written by IBM and later released as open source for Linux across multiple platforms,
including x86, IBM Power, IBM Z, and even ARM. There are multiple integrations for using
and analyzing nmon data, including charts and spreadsheet integrations. There is even a
newer version (nmonj) that saves the performance data in JSON format for better integration
into databases and for web browser graphing.
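For example, the following invocation (the output directory is an illustrative assumption)
captures a day of samples to a file for later analysis:

nmon -f -s 60 -c 1440 -m /var/perf/nmon    # one sample per minute for 24 hours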
There are more projects being developed by using Python and other languages that are
portable between architectures. Some of them have good export capabilities to InfluxDB,
Cassandra, OpenTSDB, StatsD, Elasticsearch, or RabbitMQ.
Nagios
The next level is deploying complete monitoring environments, such as Nagios or Zabbix.
These frameworks support extensive customization and scalability. Their source code can be
downloaded and compiled on IBM Power, with agents and integrations for both AIX and IBM i.
Some of them offer enterprise support options or require licenses in some instances.
IBM Instana
In the field of commercial monitoring solutions, we highlight IBM Instana™. It leverages
various open-source projects to provide advanced monitoring and observability capabilities,
making it an excellent enterprise-supported solution for monitoring Linux on Power (ppc64le),
AIX, and IBM i.
IBM Instana integrates with technologies such as Apache Kafka for real-time data processing;
Prometheus for metrics collection; Grafana for data visualization; OpenTelemetry for tracing
and metrics; Elasticsearch, Logstash, and Kibana (ELK) for log management; Kubernetes for
container orchestration; and Jenkins for continuous integration and delivery.
With support for Debian, Red Hat, and SUSE on ppc64le, Instana helps ensure
comprehensive, real-time visibility into the performance and health of applications and
systems that is backed by IBM's robust enterprise support.
System hardening
Implement the following best practices:
Minimal installation: Begin with a minimal base installation and install only the necessary
software and services, which reduces the attack surface. It is simpler to add software than
to remove it.
Compliance: Use tools such as OpenSCAP or PowerSC to help ensure minimum levels of
compliance in all systems. This task can be accomplished by generating a base image
and being rigorous with change control by using tools such as Ansible for configuration
management. This approach enforces consistent security policies across all systems.
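As one way to implement this compliance guidance, an OpenSCAP scan against a CIS
profile might look like the following sketch (the profile ID and data-stream path vary by
distribution and SCAP Security Guide release):

sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis \
  --report /tmp/compliance-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml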
Access control
Implement the following best practices:
User authentication: Implement strong authentication mechanisms, including MFA. Use
SSH key pairs instead of passwords for remote access, with a second method of
authentication.
RBAC: Assign permissions based on roles rather than individual users. sudo is a powerful
tool for implementing this approach locally.
Password policies: Enforce strong password policies, including complexity requirements,
expiration, and account lockout mechanisms.
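As a sketch of the SSH guidance above, the following fragment of /etc/ssh/sshd_config
enforces key-based access and disables direct root login; pair it with a PAM-based MFA
module for the second factor:

PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no

Reload the daemon afterward (for example, sudo systemctl reload sshd).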
Data protection
Implement the following best practices:
Encryption: Use encryption for data at rest and in transit. Implement SSL/TLS for network
communications and encrypt sensitive files on disk.
Backup strategies: Regularly back up critical data and test restore procedures. Use tools
like Bacula or IBM Storage Protect for automated backups.
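As an illustration of encryption at rest, a LUKS-encrypted volume can be set up with
cryptsetup (the device name and file system type are placeholder assumptions):

sudo cryptsetup luksFormat /dev/sdX1          # initialize LUKS on the device
sudo cryptsetup luksOpen /dev/sdX1 securedata # map the decrypted volume
sudo mkfs.xfs /dev/mapper/securedata          # create a file system on it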
Summary
In summary, layered security provides enhanced safety. However, even the most robust
defenses have weaknesses, which makes their effectiveness dependent on the least secure
component. Achieving the right balance between security and usability is essential. Although
technologies advance and operating systems change, core problems remain and new ones
emerge.
A well-defined IRP is crucial for minimizing the impact of security incidents. Here are the key
components of an effective IRP:
Preparation: Define roles and responsibilities for an incident response that are specific to
Linux on Power environments. Ensure that all team members are trained and familiar with
the response procedures for this architecture. Make sure that you have a clearly defined
architecture where the people who specialize in each technology (PowerVM, SUSE, Red
Hat, Ubuntu, databases, applications, storage, and communications) understand the Linux
on Power environment.
Identification: Implement monitoring and alerting mechanisms to quickly identify potential
security incidents in the Power environment. Use log analysis and SIEM tools that are
tailored for the ppc64le architecture to detect anomalies. Ensure compatibility and
optimization of these tools for the Power architecture.
Containment: Develop strategies for containing incidents to prevent further damage. This
component might involve isolating affected Power servers or networks. Consider the
specific containment techniques that are suitable for Power hardware, such as leveraging
virtualization features to isolate affected LPARs, VLANs, or shared storage.
Eradication: Identify and remove the root cause of the incident in the Power environment.
This component might involve applying patches, removing malware, or addressing
configuration issues that are specific to ppc64le systems. Ensure that the incident
response team is familiar with patch management and malware removal tools that are
compatible with Linux on Power.
Recovery: Restore affected Power servers to normal operation. This component might
involve restoring data from backups, rebuilding compromised LPARs, or reconfiguring
network settings that are specific to the Power architecture. Ensure that recovery
procedures are tested and validated for ppc64le environments.
Lessons learned: After resolving an incident, conduct a postmortem analysis to identify
lessons that are learned and improve future response efforts. Update IRPs and security
policies based on findings, considering any unique aspects of the Power environment.
Document any architecture-specific issues and resolutions to enhance future readiness.
Tip: Regularly test and update your IRP to ensure that it remains effective and relevant to
the threat landscape.
Red Hat OpenShift is a unified platform to build, modernize, and deploy applications at scale.
Work smarter and faster with a complete set of services for bringing apps to market on your
choice of infrastructure. Red Hat OpenShift delivers a consistent experience across public
cloud, on-premises, hybrid cloud, or edge architectures.
Red Hat OpenShift offers you a unified, flexible platform to address several business needs,
such as an enterprise-ready Kubernetes orchestrator to a comprehensive cloud-native
application development platform that can be self-managed or used as a fully managed cloud
service.
Figure 7-1 shows how Kubernetes is only one component (albeit a critical one) in Red Hat
OpenShift.
Built by open source leaders, Red Hat OpenShift includes an enterprise-ready Kubernetes
solution with a choice of deployment and usage options to meet the needs of your
organization. From self-managed to fully managed cloud services, you can deploy the
platform in the data center, in cloud environments, and at the edge of the network. With Red
Hat OpenShift, you can get advanced security and compliance capabilities, end-to-end
management and observability, and cluster data management and cloud-native data
services. Red Hat Advanced Cluster Security for Kubernetes modernizes container and
Kubernetes security, enabling developers to add security controls early in the software
lifecycle. With Red Hat Advanced Cluster Management for Kubernetes, you can manage your
entire application lifecycle and deploy applications on specific clusters based on labels. Red
Hat OpenShift Data Foundation supports performance at scale for data-intensive workloads.
Red Hat OpenShift is a strong leader in the cloud landscape of Kubernetes platforms, and is
chosen for its strengths in enterprise environments, multi-environment consistency, and
developer-centric features.
Figure 7-2 shows a basic cluster architecture. Although a cluster can technically be created
with one master node and two worker nodes, a best practice is to use at least three master
nodes, which can share functions and provide failover, and three or more worker nodes to
provide failover and scalability.
The major services that are running in the control plane are as follows:
Apiserver Acts as the front end for Kubernetes. The application programming
interface (API) server is the component that clients and external tools
interact with.
etcd A highly available key-value store that is used as Kubernetes' backing
store for all cluster data. It maintains the state of the cluster.
Workload resources
The control plane is in charge of setting up and managing the worker nodes that are running
the application code. Workload components can be described as follows:
Deployments A deployment specifies a state for a group of pods. You describe a
state in a deployment, and the Deployment Controller changes the
actual state to the wanted state at a controlled rate. You can define
deployments to create ReplicaSets or to remove existing deployments
and adopt all their resources into new deployments.
ReplicaSet A ReplicaSet maintains a stable set of replica pods running at any
given time. As such, it is often used to help ensure the availability of a
specified number of identical pods. This approach maintains a stable
set of replica pods running at any given time.
StatefulSets Used for applications that require persistent storage and a unique
identity for each pod, making them ideal for databases and other
stateful applications.
DaemonSets Helps ensure that each node in the cluster runs a copy of a pod.
Useful for deploying system services that need to run on all or certain
nodes.
Networking
Networking connectivity between pods, and between pods and outside services, is managed
within a Kubernetes cluster. The following functions are maintained by the cluster:
Service An abstraction that defines a logical set of pods and a policy by which
to access them. Services enable communication between different
pods and external traffic routing into the cluster.
Ingress Manages external access to the services in a cluster, typically HTTP.
Ingress can provide load-balancing, Secure Sockets Layer (SSL)
termination, and name-based virtual hosting.
Storage
Containers are by definition ephemeral, as is any data that is stored in the container. To enable
persistent storage, Kubernetes uses the following concepts:
Persistent volumes Persistent volumes are resources in the cluster
that can be connected to containers to provide
persistent storage.
Persistent volume claims (PVCs) PVCs are requests for storage by users. These
requests are satisfied by allocating persistent
volumes.
Security
Role-based access control (RBAC) controls authorization, which determines what operations
a user can perform on cluster resources. RBAC is crucial for maintaining the security of the
cluster.
Here is a detailed look at how Red Hat OpenShift builds on the core Kubernetes architecture:
Enhanced developer productivity:
– Red Hat OpenShift includes a sophisticated web-based console that provides a
simpler interface than the standard Kubernetes dashboard. This console enables
developers to manage their projects, visualize the state of their applications, and
access a broad range of development tools directly.
– Code-ready containers simplify the setup of local Red Hat OpenShift clusters for
development purposes, providing a minimal, preconfigured environment that can run
on a developer's workstation. They are useful for simplifying the “getting started”
experience.
– The S2I tool is a powerful feature for building reproducible container images from
source code. This tool automates the process of downloading code, injecting it into a
container image, and assembling an image. The new image incorporates runtime
artifacts that are necessary to run the code, which streamlines the workflow from
source code to deployed application.
Advanced security features:
– Red Hat OpenShift enhances Kubernetes security by implementing Security Context
Constraints (SCCs). SCCs are like Pod Security Policies but provide more granular
security controls over the deployment of pods. They enable administrators to define a
set of conditions that a pod must run with to be accepted into the system, such as
forbidding running containers as root.
– Red Hat OpenShift integrates an OAuth server that can connect to external identity
providers, which enables a streamlined authentication and authorization process. This
integration enables users to log in to Red Hat OpenShift by using their corporate
credentials, simplifying access management and enhancing security.
Developer productivity
Red Hat OpenShift is designed to enhance developer productivity by streamlining processes
and reducing the complexities that are typically associated with deploying and managing
applications. Here is a detailed look at how Red Hat OpenShift achieves this goal through its
key features:
Developer-focused user interface:
– The Red Hat OpenShift Console is a powerful interface that provides developers with
an overview of all projects and resources within the cluster. It offers a perspective that
is tailored to developers' needs, enabling them to create, configure, and manage
applications directly from the browser. Features like the Topology view enable
developers to visualize their applications and services in a GUI, making it simpler to
understand and manage the relationships between components.
– Red Hat OpenShift includes a Developer Catalog that offers many build and deploy
solutions, such as databases, middleware, and frameworks, which can be deployed on
the cluster with a few clicks. This self-service portal accelerates the setup process for
developers, which enables them to focus more on coding and less on configuration.
By focusing on these aspects of developer productivity, Red Hat OpenShift lowers the barrier to
entry for deploying applications in a Kubernetes environment, simplifies the management of
these applications, and accelerates the development cycle. This approach enables developers to
spend more time coding and less time dealing with deployment complexities, leading to faster
innovation and deployment cycles in a cloud-native landscape.
By providing these comprehensive security features, Red Hat OpenShift addresses the
complex security challenges that are faced by enterprises today, which helps ensure that their
deployments are secure by design, compliant with industry standards, and capable of
withstanding modern cybersecurity threats. This security-first approach is integral to
maintaining trust and integrity in enterprise applications and data.
The major security challenges that are linked with distributed environments are as follows:
Complexity and visibility Microservices challenge security due to the distributed nature
of their building blocks. Because containers are independent and
built on various frameworks (such as different languages and
libraries), the security challenges require a preventive strategy
to monitor containers.
Communication Microservices communicate with each other through APIs,
increasing the attack surface. Encryption of data and
authentication are measures that must address the issue
effectively.
Access control Access must be monitored granularly, and specific policies are
required to help ensure a balance between a smooth
development workflow and a highly secure environment.
Beginning with the operating system layer, this section then explores the Compute layer,
specifically focusing on the IBM Power server to emphasize the security features that are
integrated into its hardware design. Before delving into a more detailed description, we also
introduce the Network and Storage layers, highlighting how the Red Hat OpenShift platform
provides strategies to address challenges.
Red Hat OpenShift Container Platform leverages Red Hat CoreOS, a container-oriented
operating system that implements the Security Enhanced Linux (SELinux) kernel to achieve
container isolation and supports access control policies. CoreOS includes the following
features:
Ignition: A boot system configuration that is responsible for starting and configuring
machines.
CRI-O: A container run time integrating with the OS. It is responsible for running, stopping,
and restarting containers (it replaces the Docker Container Engine).
Kubelet: A node agent that is responsible for monitoring containers.
An extra security measure is implemented by namespaces, which enable you to abstract the
resources that are consumed, including the OS, so that the running container appears as
though it is running its own OS, which limits the attack surface and prevents vulnerabilities
from contaminating other containers. Compromised containers are an attack vector for the
host OS and for other containers that are not running SELinux, so with control groups, the
administrator can set a limit on the resources that a collection of containers can consume
from the host.
Secure computing profiles can be defined to limit the system calls that are available to a
collection of containers.
Ultimately, SELinux works together with namespaces, control groups, and secure computing
profiles to isolate containers.
A best practice to secure a multi-tenant environment is to design a container with the least
privileges possible (see 7.4.1, “Privileges” on page 218). As described in 7.3, “Securing your
container environment” on page 213, the administrator can (and should) apply a MAC policy for
every user and application while making sure that control groups limit the resources that
containers may consume from the host.
IBM Power10 has in-core hardware that protects against return-oriented programming (ROP)
cyberattacks, with limited impact (1 - 2%). ROP attacks are difficult to identify and contain
because they collect and reuse existing code from memory (also known as “gadgets”) rather
than injecting new code into the system. Hackers chain the commands in memory to perform
malicious actions.
IBM Power10 isolates the Baseboard Management Controller (BMC), which is the
micro-controller that is embedded on the system board that is responsible for controlling
remote management capabilities, and implements allowlist and blocklist approaches to limit
the CPU resources that the BMC can access.
For more information, see 1.4, “Architecture and implementation layers” on page 10.
Red Hat OpenShift comes with Red Hat Single Sign-On, which acts as an API authentication
and authorization measure to secure platform endpoints.
Kubernetes clusters are composed of at least one master node (preferably more for
redundancy purposes) and multiple worker nodes, which are virtual machines (VMs) or
physical machines that the containers run on. Each node has an IP address, and
containerized applications are deployed on these nodes as pods. Each pod is identified with a
unique IP address, which eases network management because the pod can be treated
as a physical host or VM in terms of port allocation, naming, and load balancing.
Red Hat OpenShift SDN uses Open vSwitch to manage network traffic and resources as software,
enabling policy-based management. SDN controllers satisfy application requests by
managing networking devices and routing the data packages to their destination.
Leveraging Single Root I/O Virtualization (SR-IOV) on IBM Power servers, the network design
becomes more flexible.
Before moving to storage, another functional aspect of Red Hat OpenShift is Network File
System (NFS), which is the method that is used to share files across clusters over the
network. Although NFS is an excellent solution for many environments, understanding the
workload requirements of an application is important when selecting NFS-based storage
solutions.
When a container is created, there is a transient layer that handles all read/write data.
However, when the container stops running, this ephemeral layer is lost. According to the
nature of the container, administrators decide to assign either volumes (bound to the lifetime
of the pod) or persistent volumes (persisting longer than the lifetime of the pod).
Dynamic provisioning, which benefits a microservices architecture, is facilitated by the
Container Storage Interface (CSI), which enables vendor-neutral management of file and
block storage. Using a CSI API enables the following functions:
Provision or deprovision a determined volume.
Attach or detach a volume from a node.
Mount or unmount a volume from a node.
Consume block and mountable volumes.
Create or delete a snapshot.
Provision a volume from a snapshot.
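For example, a PVC that requests block storage through a CSI-backed storage class might
look like the following sketch (the storage class name is a placeholder assumption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-block    # placeholder CSI storage class
  resources:
    requests:
      storage: 10Gi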
With the Red Hat OpenShift Platform Plus plan, the enterprise can leverage Red Hat
OpenShift Data Foundation, which is a software-defined storage orchestration platform for
container environments. Its data fabric capabilities are derived from the combination of
Red Hat Ceph (a software-defined storage platform), Rook.io (a storage operator), and
NooBaa (a storage gateway). Red Hat OpenShift Data Foundation can be deployed as an
internal or external storage cluster. It uses CSI to serve storage to the Red Hat OpenShift
Container Platform pods. With these capabilities, you can manage block, file, and object
storage to serve databases, CI/CD tools, and S3 API endpoints to the nodes.
With the contextual framework of the storage layer in Red Hat OpenShift clarified, here are
the security measures that Red Hat Ceph enforces to address threat and vulnerability
management, encryption, and identity and access management (IAM):
Maintaining upstream relationships and community involvement to help focus on security
from the start.
Selecting and configuring packages based on their security and performance track
records.
Building binary files from associated source code (instead of accepting upstream builds).
Source trusting
When pulling code from a GitHub repository, consider whether you can trust the third-party
developer. Developers might overlook the vulnerabilities of libraries or other dependencies
that are used in the code, so conduct due diligence before deploying a container in your
enterprise environment.
To mitigate this risk, Red Hat provides Quay, which is a security-focused container image
registry that is included in Red Hat OpenShift Platform Plus.
If you prefer to scan for vulnerabilities with different tools, you can integrate third-party
scanners such as OpenSCAP, Black Duck Hub, JFrog Xray, and Twistlock with Red Hat
OpenShift.
Deployments on a cluster
As a best practice, leverage automated policy-based tools to deploy containers in production
environments. SCCs, which are packaged in Red Hat OpenShift Container Platform, help
administrators secure sensitive information by allowing/denying access to volumes,
accepting/denying privileges, and extending/limiting capabilities that a container requires.
Orchestrating securely
Red Hat OpenShift extends Kubernetes capabilities in terms of secure containers
orchestration as follows:
Handling access to the master node through TLS, which helps ensure that data over the
internet is encrypted.
Helping ensure that the apiserver access is based on X.509 certificates or OAuth access
tokens.
Avoiding exposing etcd (an open source key-value store database for critical data) to the
cluster.
Using SELinux.
Red Hat Single Sign-On, which is an API authentication and authorization service, features
client adapters for Red Hat JBoss middleware and Node.js, and supports LDAP-based
directory services. An API management tool to use in this context is Red Hat 3scale API
Management.
To configure a firewall for Red Hat OpenShift Container Platform 4.12, define the sites that
Red Hat OpenShift Container Platform requires so that the firewall grants access to those
sites. Create an allowlist that contains the URLs that are shown in Figure 7-4. If a specific
framework requires more resources, include them now.
Figure 7-4 Allowlist for the Red Hat OpenShift Container Platform firewall
If you want to use Telemetry to monitor the health, security, and performance of application
components, use the URLs that are shown in Figure 7-5 to access Red Hat Insights.
If the environment extends to Alibaba, Amazon Web Services (AWS), GCP, or Azure to host
the cluster, grant access to the provider API and DNS for the specific cloud, as shown in
Figure 7-6 on page 215.
If the preferred option is the default Red Hat Network Time Protocol (NTP) server, use
rhel.pool.ntp.org.
For more information, see Configuring your firewall for Red Hat OpenShift Container Platform.
Figure 7-7 shows an example of a YAML definition of a secret object type and describes
some of the contents.
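A minimal Opaque secret, along the lines of what Figure 7-7 describes, might look like the
following sketch (the values are base64-encoded placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
data:
  username: YWRtaW4=    # base64 encoding of "admin"
  password: c2VjcmV0    # base64 encoding of "secret"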
Security contexts (SCs) and SCCs are required for a container to configure access to
protected Linux operating system functions on a Red Hat OpenShift Container Platform
cluster. SCs are defined by the
development team, and SCCs are defined by cluster administrators. An application's SCs
specify the permissions that the application needs, and the cluster's SCCs specify the
permissions that the cluster allows. An SC with an SCC enables an application to request
access while limiting the access that the cluster grants.
By default, Red Hat OpenShift prevents the containers running in a cluster from accessing
protected functions. These functions (Linux features such as shared file systems, root access,
and some core capabilities, such as the kill command) can affect other containers running
in the same Linux kernel, so the cluster limits access to them. Most cloud-native applications
work with these limitations, but some (especially stateful workloads) need greater access.
Applications that need these functions can still use them, but they need the cluster's
permission.
The field that is designated by the number 1 in Figure 7-10 represents the users to which the
SCC “restricted” applies. Field 2 identifies the group to which the SCC “restricted” applies.
It is a best practice to avoid modifying default SCCs, although administrators can create
customized SCCs that better fit specific requirements and policies in the organizational
processes. An example of how a new SCC can be created is described in 7.4.2, “Access
controls” on page 219 and in Example 7-3 on page 219.
The following sections describe protected Linux functions, such as privileges, access
controls, and capabilities.
7.4.1 Privileges
Privileges describe the authority of a pod and the containerized applications running within it.
Privileges can be assigned in two places: in the SC, where the privilege is set to true in the
SC request, or in the SCC, where the privilege is set to true, as shown in Example 7-1.
In Example 7-1, the first line indicates that the container runs with specified privileges, and
the second line grants the possibility for a pod that is derived by the parent pod to run with
more privileges than the parent pod.
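A sketch of these two settings, using the securityContext field names from the Kubernetes
container API (in an SCC, the corresponding fields are allowPrivilegedContainer and
allowPrivilegeEscalation), follows:

securityContext:
  privileged: true                  # the container runs as privileged
  allowPrivilegeEscalation: true    # descendants may gain more privileges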
It is a best practice to remember that privileged pods might endanger the host and other
containers, so only well-trusted processes should be allowed privileges.
Here is the correct syntax for the development team to use to include these requests:
securityContext.field
Once the request is made, it is processed and validated against the cluster SCCs.
Example 7-3 shows how a new SCC looks by integrating the fields that are listed in 7.4.2,
“Access controls” on page 219.
7.4.3 Capabilities
Some capabilities, specifically Linux OS capabilities, take precedence over the pod’s settings.
A list of these capabilities can be found in this document. For completeness, Example 7-4
shows some of the most popular ones.
In Figure 7-11 on page 221, the SC fails to pass due to three critical issues, which are shown
as points 1, 2, and 4.
In an attempt to control the pod storage volumes, the SC requests fsGroup 5555. The
reason that this action fails is that SCC restricted does not specify a range for fsGroup,
so the default range (1000000000 - 1000009999) is used, which excludes fsGroup 5555.
(1)
The SC requests runAsUser 1234. However, the SCC restricted option once again uses
the default range (1000000000 - 1000009999), so the request fails because it is not within
the range. (2)
The deployment manifest requests SYS_TIME (it manipulates the system clock). This
request fails because the SCC does not specify SYS_TIME either in allowedCapabilities
or defaultAddCapabilities (4). The only request that passes is (3). The SC requests
runAsGroup 5678, and which is allowed by the runAsAny field of the restricted SCC.
As a final remark, (5) is a note to highlight that the container is assigned to the project
default context value because seLinuxContext is set as MustRunAs, but lacks the specific
context.
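A condensed sketch of the requests in this scenario, expressed as manifest fragments with
the pass/fail points noted (field placement is simplified for illustration), follows:

securityContext:
  fsGroup: 5555          # (1) fails: outside the SCC default range
  runAsUser: 1234        # (2) fails: outside the SCC default range
  runAsGroup: 5678       # (3) passes: runAsAny in the restricted SCC
  capabilities:
    add: ["SYS_TIME"]    # (4) fails: not allowed by the SCC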
Figure 7-12 shows how the SC request can be satisfied against a customized SCC
(my-custom-scc).
This section delves into three primary areas: monitoring containers and Red Hat OpenShift
Container Storage security, audit logs, and Red Hat OpenShift File Integrity Operator
monitoring.
The Red Hat OpenShift Container Monitoring Platform addresses many of these monitoring
challenges through a preconfigured, automatically updated stack that is based on
Prometheus, Grafana, and Alertmanager. Key components of this platform are as follows:
Prometheus: Used as a back end to store time-series data, Prometheus is an open-source
solution for cloud-native architecture monitoring. It offers powerful querying capabilities
and a flexible data model, making it suitable for a wide range of monitoring scenarios.
Alertmanager: Handles alarms and sends notifications. It integrates seamlessly with
Prometheus, enabling sophisticated alerting rules and notification mechanisms.
Alertmanager supports multiple notification channels, including email, Slack, and
PagerDuty, which helps ensure that alerts reach the correct people at the correct time.
Grafana: Provides visual data representation through graphs. Grafana's rich visualization
capabilities enable users to create dynamic and interactive dashboards, making it simpler
to interpret monitoring data and identify trends and anomalies.
The platform includes default alerts that notify administrators immediately about cluster issues.
Default dashboards in the Red Hat OpenShift Container Platform web console offer visual
representations of cluster metrics, which help with a quick understanding of cluster states. The
“Observe” section of the web console enables access to metrics, alerts, monitoring
dashboards, and metrics targets. Cluster administrators can optionally enable monitoring for
user-defined projects, enabling customized monitoring of services and pods. This flexibility
helps ensure that different teams and projects can tailor monitoring to their specific needs.
IBM Instana enhances the observability and APM functions that are provided by the default
Red Hat OpenShift Container monitoring tools. Instana is an automated system and APM
service that visualizes performance through machine learning-generated graphs. It increases
application performance and reliability through deep observability and applied intelligence.
Instana excels in cloud-based microservices architectures, enabling development teams to
iterate quickly and address issues before they impact customers.
By integrating IBM Instana with Red Hat OpenShift, organizations can elevate their
monitoring and observability capabilities, which help ensure that their cloud-native
applications remain performant, resilient, and reliable.
Audit logs provide a detailed record of all activities and changes within the system. They are
crucial for tracking user actions, detecting unauthorized access, and investigating security
incidents. Effective audit logging helps in maintaining compliance with regulatory
requirements and provides an audit trail that can be used for forensic analysis.
The Red Hat OpenShift File Integrity Operator enhances security by monitoring file integrity
within the cluster. It detects unauthorized changes to critical system files, which help ensure
that the integrity of the operating environment is maintained. The File Integrity Operator works
by periodically checking the hashes of monitored files and comparing them to known good
values. Any discrepancies trigger alerts, enabling administrators to investigate and remediate
potential security breaches.
The authentication process in Red Hat OpenShift Container Platform involves multiple layers to
help ensure secure access to its resources. Users authenticate primarily through OAuth access
tokens or X.509 client certificates. OAuth tokens are obtained through the platform's built-in
OAuth server, which supports authentication flows such as Authorization Code Flow and
Implicit Flow. The server integrates seamlessly with various identity providers, including LDAP,
Keystone, GitHub, and Google, enabling organizations to leverage existing user management
systems securely.
In Red Hat OpenShift, users are classified into different categories based on their roles and
responsibilities within the platform. Regular users are typically individuals who interact directly
with applications and services that are deployed on Red Hat OpenShift. System users are
automatically generated during the platform's setup and associated with specific system-level
tasks, such as managing cluster nodes or running infrastructure-related operations.
Service accounts represent a specialized type of system user that is tailored for project-specific
roles and permissions. These accounts enable automated processes within projects, which
help ensure that applications and services can securely access resources without
compromising system integrity.
Groups play a pivotal role in managing authorization policies across Red Hat OpenShift
environments. Users can be organized into groups, facilitating streamlined assignment of
permissions and simplifying the enforcement of access control policies. Alongside user-defined
groups, Red Hat OpenShift automatically provisions virtual groups, which include
system-defined roles and default access configurations. This hierarchical group structure helps
ensure efficient management of user permissions while adhering to organizational security
policies and compliance requirements.
The internal OAuth server in Red Hat OpenShift acts as a central authority for managing
authentication and authorization workflows. It issues and validates OAuth tokens that are used
by clients to authenticate API requests, which help ensure that only authorized users and
applications can access protected resources. Administrators can configure the OAuth server to
integrate seamlessly with various identity providers, including htpasswd, Keystone, LDAP, and
external OAuth providers like GitHub or Google. Each identity provider offers distinct
authentication mechanisms, such as simple bind authentication for LDAP or OAuth 2.0 flows for
external identity providers, enhancing flexibility and compatibility with diverse organizational
environments.
RBAC is fundamental to enforcing granular access control policies within Red Hat OpenShift,
enabling administrators to define fine-grained permissions through roles and role bindings.
Roles specify a set of permissions (verbs) that dictate actions users can perform on specific API
resources (objects). Role bindings associate these roles with individual users, groups, or
service accounts, enabling administrators to implement the principle of least privilege
effectively.
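A minimal sketch of a role and role binding that grant read-only access to pods in one project
(the names are placeholder assumptions) follows:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-project
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: my-project
  name: read-pods
subjects:
- kind: User
  name: jane              # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io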
ClusterRoles extend RBAC capabilities by providing cluster-wide permissions that apply to all
users within the platform. ClusterRoleBindings establish associations between ClusterRoles
and subjects (users or groups), which enable administrators to manage permissions
consistently across large-scale deployments.
7.7 Tools
There are multiple tools that are available to help you set up and monitor security in your
Red Hat OpenShift environment. This section describes some of them.
7.7.1 Aqua
This section describes Aqua, which is a robust security tool that safeguards workloads that
are hosted on Red Hat OpenShift running on IBM Power servers. Developed by an
IBM Business Partner, Aqua addresses the intricate security challenges that are inherent in
cloud-native environments, spanning the entire lifecycle of containerized applications.
Recognizing the trend toward hybrid and multi-cloud deployments, Aqua supports security
management across diverse infrastructure environments. It enables organizations to maintain
consistent security policies and compliance measures across on-premises data centers and
public cloud platforms, which help reduce the attack surface and mitigate risks that are
associated with complex deployment landscapes.
The solution helps protect containerized Kubernetes workloads in all major clouds and hybrid
platforms, including Red Hat OpenShift, Amazon Elastic Kubernetes Service (EKS),
Microsoft Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).
Red Hat Advanced Cluster Security for Kubernetes is included with Red Hat OpenShift
Platform Plus, which is a complete set of powerful, optimized tools to secure, protect, and
manage the applications. For more information, see Red Hat Advanced Cluster Security for
Kubernetes.
A key feature of Red Hat ACS is that it prevents risky workloads from being
deployed or run. Red Hat Advanced Cluster Security monitors, collects, and evaluates
system-level events such as process execution, network connections and flows, and privilege
escalation within each container in your Kubernetes environments. Combined with behavioral
baselines and “allowlisting”, it detects anomalous activity that is indicative of malicious intent
such as active malware, cryptomining, unauthorized credential access, intrusions, and lateral
movement.
For more information about the full features of Red Hat Advanced Cluster Security, see the
Red Hat ACS Data Sheet.
Chapter 8. Certifications
Security standards are a set of guidelines and best practices that organizations can follow to
protect their sensitive information and systems from cyberthreats. These standards are
developed by various organizations and agencies, such as the International Organization for
Standardization (ISO) and the National Institute of Standards and Technology (NIST).
IBM continuously works to maintain certification for industry security standards to provide
their clients with a product base that helps them build systems that are compliant to the
relevant industry standards.
Security standards are important because they provide the following items:
Consistency: Standards provide a consistent framework for implementing security
measures, making it simpler to manage and maintain security across an organization.
Compliance: Many industries have specific regulations that require adherence to certain
security standards.
Risk reduction: Following security standards helps organizations identify and mitigate
potential risks, reducing the likelihood of cyberattacks.
Enhanced reputation: Demonstrating commitment to security standards can improve an
organization's reputation and customer trust.
8.1.2 Certifications
Certifications for security standards provide third-party validation that an enterprise is
compliant with specific security standards. Certification demonstrates a commitment to robust
security practices, reducing the risk of data breaches and cyberattacks.
If you are focusing on IBM Power servers security standards, you should learn more about
IBM PCIe Cryptographic Coprocessors, which are a family of high-performance Hardware
Security Modules (HSMs). These programmable PCI Express (PCIe) cards work with Power
servers to offload computationally intensive cryptographic processes, such as secure
payments or transactions, from the host server. Using these HSMs, you gain performance
and architectural advantages and enable future growth by offloading cryptographic
processing from the host server, in addition to delivering high-speed cryptographic functions
for data encryption and digital signing, secure storage of signing keys, or custom
cryptographic applications.
IBM PCIe Cryptographic Coprocessors meet FIPS PUB 140-2, Security Requirements for
Cryptographic Modules, Overall Security Level 4, which is the highest level of certification that
is achievable. For more information, see IBM PCIe Cryptographic Coprocessor.
Each IBM HSM device offers the highest cryptographic security that is available commercially.
FIPS PUB 140-2 defines security requirements for cryptographic modules. It is issued by the
NIST and widely used as a measure of the security of HSMs. The cryptographic processes of
each IBM HSM are performed within an enclosure on the HSM that provides complete
physical security. For more information, see IBM Cryptographic HSM Highlights.
IBM AIX supports FIPS. For more information about FIPS, see IBM AIX 7.x Security Technical
Implementation Guide and AIX FIPS Crypto Module for OpenSSL FIPS 140-2
Non-Proprietary Security Policy Document Version 1.9.
If you are using Red Hat Enterprise Linux (RHEL) CoreOS machines in your Red Hat
OpenShift cluster, you can apply FIPS PUB 140-2 when the machines are deployed based on
the status of certain installation options that govern the cluster options, which you can change
during cluster deployment. With RHEL machines, you must enable FIPS mode when you
install the operating system on the machines that you plan to use as worker machines. These
configuration methods help ensure that your cluster meets the requirements of a FIPS
compliance audit, that is, only FIPS-validated or Modules In Process cryptography packages
are enabled before the initial system start.
Common Criteria (CC) (ISO 15408) is the only global, mutually recognized product security
standard. The goal of the CC is to develop confidence and trust in the security characteristics
of a system and in the processes that are used to develop and support it.
The ISO 15408 international standard is specifically for computer security certification. For
more information and a full description of the ISO 15408 standard, see ISO/IEC
15408-1:2022.
Federal IT security professionals within the DoD must comply with the STIG technical testing
and hardening frameworks. According to DISA, STIGs “are the configuration standards for
DoD [information assurance, or IA] and IA-enabled devices and systems. The STIGs contain
technical guidance to ‘lock down’ information systems/software that might otherwise be
vulnerable to a malicious computer attack.”1
You can search through a publicly available document library of STIGs. Table 8-1 on
page 233 lists some specific operating systems and components and their corresponding
STIGs.
1 Source: https://disa.mil/.
CIS is best known for its CIS Controls, which is a comprehensive framework that has
20 essential safeguards and countermeasures to improve cyberdefense. These controls offer
a prioritized checklist that organizations can use to reduce their vulnerability to cyberattacks.
Also, CIS produces CIS Benchmarks, which provide best practices for secure system
configurations, referencing these controls to guide organizations in building stronger security
measures.
Each CIS Benchmark offers configuration recommendations that are organized into two
profile levels: Level 1 and Level 2.
Level 1 profiles provide base-level configurations that are simpler to implement with
minimal impact on business operations.
Level 2 profiles are for high-security environments, requiring more detailed planning and
coordination to implement while minimizing business disruption.
At the time of writing, there are more than 100 CIS Benchmarks that are available as
downloadable, no-charge PDFs for non-commercial use.
Here are some CIS Benchmarks that are relevant to IBM Power servers:
CIS Benchmark for IBM AIX
This benchmark provides security configuration guidelines for AIX, which is commonly
used on IBM Power servers. It includes best practices for system configuration to enhance
security and reduce vulnerabilities.
CIS Benchmark for IBM i
This benchmark offers best practices for securely configuring IBM i. It focuses on system
settings, security policies, and configurations to improve the overall security posture.
CIS Benchmarks for Linux
For IBM Power servers running Linux, there is a generic Linux benchmark, and
benchmarks for RHEL, SUSE Enterprise Linux, and Ubuntu Linux.
These benchmarks are regularly updated to reflect the latest security practices and
vulnerabilities. You can find the most recent versions and more information on the CIS
website or through their publications and resources. Table 8-2 provides a more
comprehensive list.
PowerSC sits on top of the IBM Power server stack, integrating security features that are built
into its different layers. You can centrally manage security and compliance on Power servers
for all IBM AIX and Linux on Power endpoints, which provides better support for compliance
audits, including audits for the General Data Protection Regulation (GDPR).
PowerSC helps to automate the configuration and monitoring of systems that must be
compliant with the PCI DSS. Therefore, the PowerSC Security and Compliance Automation
feature is an accurate and complete method of security configuration automation that is used
to meet the IT compliance requirements of the DoD UNIX STIG, the PCI DSS, the
SOX/COBIT, and HIPAA.
The PowerSC Security and Compliance Automation feature creates and updates ready-to-use
XML profiles that are used by the IBM Compliance Expert Express (ICEE) edition. You can
apply the PowerSC XML profiles with the pscxpert command.
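For example, applying and checking a profile might look like the following sketch (the profile
path varies by PowerSC release and compliance standard):

pscxpert -f /etc/security/aixpert/custom/PCIv3.xml   # apply a compliance profile
pscxpert -c                                          # check compliance with the applied profile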
The preconfigured compliance profiles that are delivered with PowerSC reduce the
administrative workload of interpreting compliance documentation and implementing the
standards as specific system configuration parameters. This technology reduces the cost of
compliance configuration and auditing by automating the processes. IBM PowerSC is
designed to help effectively manage the system requirements that are associated with
external standard compliance, which can potentially reduce costs and improve compliance.
FIM monitors critical files on a system that contain sensitive data, such as configuration
details and user information. From a security perspective, it is important to monitor changes
that are made to these sensitive files. FIM can also monitor changes to binary files and
libraries.
PowerSC can generate real-time alerts whenever the contents of a monitored file change or
when a file’s characteristics are modified. By using the Autonomic Health Advisor File
System (AHAFS) event monitoring technology, PowerSC RTC monitors all changes and
generates alerts by using the following methods:
Sends email alerts.
Logs a message to a file.
Sends an SNMP message to your monitoring server.
Sends an alert to the PowerSC GUI server.
One of the EDR forms in PowerSC is that you can configure intrusion detection and
prevention systems (IDPS) for a specific endpoint. For AIX, you can use the PowerSC GUI to
use the Internet Protocol Security (IPsec) facility of AIX to define parameters for intrusion
detection. The IPsec facility of AIX must be installed on the AIX endpoint. For Red Hat
Enterprise Linux (RHEL) and SUSE Linux Enterprise Server, you must install the psad
package on each endpoint on which you want to run psad, as described in Installing PowerSC
on Linux systems, before you can use it with the PowerSC GUI.
The PowerSC GUI uiAgent monitors the endpoint for port scan attacks on the ports that are
listed in the IPsec filter rules. By default, PowerSC creates an IPv4 rule in
/etc/idp/filter.rules to monitor operating system network ports. PowerSC also creates
the /var/adm/ipsec.log log file. The IPsec facility of AIX also parses IPv6 rules in
/etc/idp/filter.rules, and the IPv6 addresses appear in the event list.
IBM PowerSC can integrate with the Clam AntiVirus (ClamAV) open-source antivirus
software toolkit to help prevent malware attacks and detect trojans,
viruses, malware, and other malicious threats by scanning all incoming data to prevent
malware from being installed and infecting the server.
Through the PowerSC server UI, you can configure anti-malware settings for specific
endpoints. Then, ClamAV moves or copies any detected malware to the quarantine directory
on the PowerSC uiAgent, assigning a time-stamped prefix and nullifying file permissions to
prevent access. ClamAV is not included in the initial PowerSC package, so you must install it
on the uiAgent before you can use it with the PowerSC GUI.
For more information about a generic PowerSC ClamAV detailed configuration and features,
see Configuring anti-malware.
IBM PowerSC can deploy multi-factor authentication (MFA) for mitigating the risk of a data
breach that is caused by compromised credentials. PowerSC Multi-Factor Authentication
provides numerous flexible options for implementing MFA on Power. PowerSC Multi-Factor
Authentication is implemented with a Pluggable Authentication Module (PAM), and can be
used on AIX, Virtual I/O Server (VIOS), RHEL, SUSE Linux Enterprise Server, IBM i,
Hardware Management Console (HMC), and PowerSC Graphical User Interface server.
The National Institute of Standards and Technology (NIST) defines MFA as authentication
that uses two or more factors to achieve authentication. Factors include “something that you
know”, such as a password or personal identification number (PIN); “something that you
have”, such as a cryptographic identification device or a token; or “something that you are”,
such as a biometric.
IBM PowerSC authentication factors improve the security of user accounts. The user either
provides the credentials directly in the application (in-band) or out-of-band.
For in-band authentication, users can generate a token to satisfy a policy and use that token
to directly log in. Out-of-band authentication enables users to authenticate on a user-specific
web page with one or more authentication methods to retrieve a cache token credential (CTC)
that they then use to log in. For more information, see Out-of-band authentication type.
IBM PowerSC MFA server can be installed on AIX, IBM i, or Linux. For more information
about installation procedures, see the following resources:
Installing IBM PowerSC MFA server on AIX
Installing and configuring IBM PowerSC MFA server on IBM PASE for i
Installing an IBM PowerSC MFA server on Linux
IBM PowerSC Multi-Factor Authentication Version 2.2.0 User's Guide
IBM PowerSC Multi-Factor Authentication Version 2.2.0 Installation and Configuration
Before you configure IBM PowerSC MFA for high availability (HA), meet the following
prerequisites:
The primary and secondary server must use the same operating system.
Updates to any files in /opt/IBM/powersc/MFA/mfadb are not preserved if you reinstall the
IBM PowerSC MFA server.
If the secondary server uses RHEL or SUSE Linux Enterprise Server, install PostgreSQL,
openCryptoki, and opencryptoki-swtok on the secondary server.
For more information, see Configuring IBM PowerSC MFA for high availability.
With the introduction of IBM Power Virtual Server and its ability to run AIX, IBM i, and Linux
on Power in the cloud, understanding Power Virtual Server security is crucial for establishing
a reliable and secure environment.
This chapter gives a high-level overview of security in Power Virtual Server. For more
information, see 10.7, “Additional references” on page 247.
You can use the service access roles to define the actions that the users can perform on
Power Virtual Server resources. Table 10-2 shows the IAM service access roles and the
corresponding actions that a user can complete by using the Power Virtual Server.
When you assign access to the Power Virtual Server service, you can set the access scope
to:
All resources
Specific resources, which support the following selections:
– Resource group
– Service instance
Power Virtual Server service: Editor, Manager, Operator, Reader, and Viewer
IBM Virtual Private Cloud Infrastructure Services service: Editor, Manager, Operator,
Reader, Viewer, and Virtual Private Network (VPN) Client
Although learning from real-world incidents is valuable, proactive measures are crucial to
prevent costly breaches. Security experts emphasize the importance of cultivating a
security-conscious workforce through targeted training and awareness campaigns. By
fostering a culture where security is a shared responsibility, organizations can reduce their
risk exposure.
By following these best practices, organizations can reduce the financial and reputational
impact of a data breach.
IBM X-Force published the IBM X-Force Threat Intelligence Index 2024. Here is a summary of
the findings:
Identity-centric attacks: Cybercriminals increasingly target identities as the easiest point of
entry, with a rise in credential theft and abuse.
Ransomware decline, data theft surge: While ransomware attacks decreased, data theft
and leaks became the primary motivation for cyberattacks.
Infostealer malware growth: The use of infostealer malware to steal credentials rose
tremendously, fueling the dark web's stolen credential market.
Overall, the report highlights a shift in cybercrime tactics toward identity-based attacks and
data theft while also warning of the growing threat that is posed by AI. Organizations must
prioritize identity protection, implement strong security measures, and stay vigilant against
evolving threats.
11.1.4 Summary
Fixing the basics is key. Security is built from steps such as asset inventory,
patching, and training. Here are some important points to consider:
Develop an automated methodology for secure assessments and detection.
Establish a risk management framework that includes cyberinsurance.
Maintain a dedicated environment for testing security patches.
Ensure that rollback options are available in all scenarios.
This section describes some of the basics of locking down your logical partitions (LPARs).
This lockdown is not done by default, but it is fairly simple to do. It includes default
permissions and umasks; good usernames and passwords; logging, patching, and removing
insecure daemons; and integrating Lightweight Directory Access Protocol (LDAP) or Active
Directory (AD).
11.2.1 Usernames and passwords
A username and password combination is one of the most basic protections. Using longer
usernames and passwords requires a system change. A longer maximum username length is
required if you want to integrate with LDAP or AD, and changing it requires a restart. To
increase the maximum username length to 36, run the following command:
chdev -l sys0 -a max_logname=36
This change requires a restart of the LPAR. To allow longer passwords, use the chsec
command. The following command configures the system to use ssha256 (which supports
passwords of up to 255 characters). The change takes effect the next time that local users
change their passwords:
chsec -f /etc/security/login.cfg -s usw -a pwd_algorithm=ssha256
As a best practice, set the system to automatically create home directories, which are
important in an LDAP or AD environment. Example 11-1 shows how to accomplish this task.
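A minimal sketch of one way to do this on AIX follows; it assumes an AIX level that
supports the mkhomeatlogin attribute of the usw stanza:
# Automatically create home directories at first login
chsec -f /etc/security/login.cfg -s usw -a mkhomeatlogin=true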
11.2.2 Logging
Logging is a critical part of any system-protection strategy. Without logs, it is impossible to
know what has been happening on the system. The syslog daemon (syslogd) starts by
default on AIX, but the log configuration file is not set up to log everything; you must correctly
set up /etc/syslog.conf. It is a best practice to set up a separate file system for logs (such as
/usr/local/logs) rather than use the default of /var/spool: if /var fills up, the system can
crash, whereas if a dedicated log file system fills up, logging simply stops. File systems
should still be monitored, but storing logs in their own file system protects the system
against large logs bringing it down.
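A minimal sketch of creating such a file system on AIX follows; the size and volume group
are illustrative:
# Create and mount a dedicated 2 GB JFS2 file system for logs
crfs -v jfs2 -g rootvg -m /usr/local/logs -a size=2G -A yes
mount /usr/local/logs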
Logs can be written to a file, sent to the console, logged to a central host across the network
(but the traffic can be substantial), emailed to an administrator, sent to all logged-in users, or
any combination of these methods. The most commonly used method is writing to a file in a
file system. Once the file system is set up, create a /etc/syslog.conf file.
Example 11-2 on page 253 shows an example file that writes to a local file system. It keeps
the logs to no more than 2 MB, and then rotates and compresses them. It keeps the last 10
logs. Do this task on all LPARs and Virtual I/O Servers (VIOSs).
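A minimal sketch of such a configuration follows; the facilities, file names, and thresholds
are illustrative rather than a copy of Example 11-2:
# /etc/syslog.conf: write to local files, rotate at 2 MB, compress, keep 10 logs
*.info       /usr/local/logs/syslog.log   rotate size 2m files 10 compress
auth.info    /usr/local/logs/auth.log     rotate size 2m files 10 compress
daemon.info  /usr/local/logs/daemon.log   rotate size 2m files 10 compress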
Go into /usr/local/logs and create each of the files that are shown in Example 11-2 by
running touch. Now, you can stop (stopsrc -s syslogd) and start (startsrc -s syslogd) the
logging daemon.
Ideally, the /etc/inetd.conf file is only four lines, and everything is commented out. On a
NIM server, you see tftp and bootp uncommented. Occasionally, system maintenance
uncomments or adds services. When the file is only four lines, you can see immediately what
was uncommented or added.
As a best practice, do not use ftp and telnet because they are insecure; use ssh and sftp
instead. If you must use telnet or ftp, you can uncomment them, but be aware that they send
passwords and other data in clear text.
As a best practice, look at /etc/rc.tcpip to see whether snmp, sendmail, and other daemons
are starting. If you need snmp or sendmail, then configure them to keep hackers from
exploiting them.
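A quick way to audit these files is shown in the following sketch:
# Show the active (uncommented) inetd services, and refresh inetd after edits
grep -v "^#" /etc/inetd.conf
refresh -s inetd
# Show which daemons rc.tcpip starts at boot
grep "^start" /etc/rc.tcpip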
11.2.5 Patching
At a minimum, make sure that you are running a fully supported version of the OS (VIOS, AIX,
IBM i, or Linux). To check, use the Fix Level Recommendation Tool (FLRT). As a best
practice, keep your patching up to date to proactively solve problems.
In AIX and VIOS environments, there are two different kinds of patching:
Fix packs (Technology Levels and Service Packs (SPs))
Emergency fixes or interim fixes
Fix packs are installed by using installp. Emergency fixes and interim fixes are installed by
using emgr.
Technology Levels and SPs are found at IBM Fix Central. Check there regularly for updates to
your LPARs, VIOSs, server and I/O firmware, and Hardware Management Consoles (HMCs).
Also, there are products that are installed (even at the latest SP) that need updating, such as
Java, OpenSSH, and OpenSSL. Java patches are downloaded at IBM Fix Central. OpenSSH
and OpenSSL are downloaded at the Web Applications website. As a best practice, perform a
full patching window every 6 months unless there is an emergency. You can use the FLRT
and Fix Level Recommendation Tool Vulnerability Checker (FLRTVC) tools to determine what
patching must occur.
As a best practice, first update the HMC, then the server firmware, then the I/O firmware and
VIOSs, and then the LPARs. However, review the readme and description files for every
update to make sure that there are no prerequisites that must be met first. There are also
some requirements for IBM Power9 and adapter firmware because of the new Trusted Boot
settings.
You can write scripts that grep certain things in the output and email them to yourself.
You can run flrtvc ahead of time and then download and prestage the updates. flrtvc
typically identifies emergency fixes and interim fixes that you need, and the Java, OpenSSH,
OpenSSL, and other updates that must go on the system.
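A minimal sketch of such a script follows; it assumes that flrtvc.ksh was downloaded from
the FLRTVC home page to /usr/local/bin, and the report path and mail recipient are
illustrative:
# Run FLRTVC and email the findings to an administrator
ksh /usr/local/bin/flrtvc.ksh > /tmp/flrtvc_report.txt
mail -s "FLRTVC report: $(hostname)" admin@example.com < /tmp/flrtvc_report.txt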
As a best practice, wait until firmware, Technology Levels, or SPs have been available for at
least 1 - 2 months before you apply them. Then, update your NIM server and migrate the
updates through test, dev, QA, and production.
Do not put root or other system accounts under the control of AD or LDAP. Those accounts
must be local. Restrict those accounts to console access only.
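A minimal sketch for AIX follows; it keeps root resolved locally and blocks remote logins so
that root can log in only at the console:
# Keep root in the local files registry and disable remote login
chuser SYSTEM=compat registry=files rlogin=false root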
11.2.8 Enhanced access
If you have admins or other users who need enhanced access, provide it by using sudo or
another tool. If multiple users are logging in as root, then there is no accountability. Using
sudo causes everything to be logged.
To do this task, you can go to the AIX Linux Toolbox, download yum.sh, and run it. This
process installs rpm and yum (it requires ftp access to the IBM repositories). After yum is
installed, use yum to install sudo or other tools. Then, use visudo to put together the rules.
You can allow a user to run commands as root with or without a password, and you can also
restrict them to running only certain commands as root. This access is useful for level 1
support and database administrators (DBAs) who need privileges to perform certain tasks.
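A minimal sudoers sketch follows (edit it with visudo); the user name, group name, and
command list are illustrative:
# Level 1 support can restart the logging daemon without a password
ops1   ALL = (root) NOPASSWD: /usr/bin/stopsrc -s syslogd, /usr/bin/startsrc -s syslogd
# Members of the dba group can run mount and umount as root (password required)
%dba   ALL = (root) /usr/sbin/mount, /usr/sbin/umount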
11.2.9 Backups
As a best practice, take regular mksysb (OS bootable) backups. Make sure that these bare
metal mksysb backups are part of any backup and disaster recovery (DR) plan. An mksysb
should be taken at least monthly, and before and after any system maintenance. Also, as a
best practice, have two disks (even on the storage area network (SAN)) reserved for rootvg
on the system. One is active, and the other is used to take an alt_disk_copy backup of
rootvg before you make changes.
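A minimal sketch follows; the target disk and backup path are illustrative:
# Clone rootvg to the alternative disk before maintenance
alt_disk_copy -d hdisk1
# Take a bootable OS backup, regenerating /image.data first
mksysb -i /backup/$(hostname).mksysb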
11.2.11 References
For more information about setting up your AIX security, see the following resources:
FLRT home page
FLRTVC home page
The apar.csv file
FLRTVC Online Tool
IBM Fix Central (fixes and updates)
FLRT Lite (check firmware and supported software levels)
Web Applications (OpenSSH, LDAP, OpenSSL, and Kerberos)
AIX Linux Toolbox
Figure 11-1 The IBM Fix Level Recommendation Tool for IBM Power
Note: For more information, see the Fix Level Recommendation Tool for IBM Power.
Protecting hardware, data, and backup systems from damage or theft is paramount. A robust
physical security framework is essential for any organization, serving as the bedrock on which
other security measures are built. Without it, securing information, software, user access, and
networks becomes more challenging.
Beyond internal systems, physical security encompasses protecting facilities and equipment
from external threats. Building structures, such as fences, gates, and doors, form the initial
defense against unauthorized access. A comprehensive approach considers both internal
and external factors to create a secure environment.
Effective physical security is essential for protecting facilities, assets, and personnel. A
comprehensive strategy involves a layered approach that combines various security
measures to deter, detect, delay, and respond to potential threats.
Deterrence
Discourage unauthorized access through visible security measures:
Clear signage indicating surveillance
Robust physical barriers like fences and gates
High-quality security cameras
Controlled access systems (card readers and keypads)
Detection
Identify potential threats early by using the following items:
Motion sensors and alarms
Advanced video analytics
Environmental sensors (temperature and humidity)
Real-time monitoring systems
Delay
Hinder intruders and buy time for response by using the following items:
Multiple points of entry and exit
Sturdy doors, locks, and window reinforcements
Access control measures (biometrics and mobile credentials)
Security personnel or guards
Response
Swiftly address security incidents by using the following items:
Emergency response plans and procedures
Integration of security systems with communication tools
Trained personnel for incident management
Collaboration with law enforcement
Perimeter security
Perimeter security forms the initial line of defense for any facility. Physical barriers like fences,
gates, and surveillance systems create a deterrent against unauthorized access. Strategic
landscaping and lighting can further enhance perimeter protection by improving visibility and
restricting movement.
Access control
Granting authorized access while preventing unauthorized entry is crucial. Modern access
control systems, such as key cards, biometric readers, and mobile credentials, offer
convenience and security. Restricted areas demand stricter controls, often incorporating MFA
and surveillance.
Even the most sophisticated security systems are only as effective as the people who use
them. Employees who understand their role in security can enhance a facility's protection. By
equipping your staff with the knowledge and skills to handle emergencies, you create a safer
environment for everyone.
IBM Technology Expert Labs uses methodologies, practices, and patterns to help partners
develop complex solutions, achieve better business outcomes, and drive client adoption of
IBM software, servers, and storage.
This appendix describes security offerings from IBM Technology Expert Labs. For more
information about Technology Expert Labs broader offerings, see the Technology Expert Labs
website.
By engaging IBM Technology Expert Labs, you can have experts help secure your IBM Power
environment by assessing your setup. The purpose of this activity is to help you assess
system security on IBM Power. It provides a comprehensive security analysis of a single AIX,
IBM i, or Linux instance, or a single Red Hat OpenShift cluster.
This service can help you address issues that affect IT compliance and governance
standards.
Assessing IBM Power Security for AIX, Linux, or Red Hat OpenShift
The goal of this service is to help a client assess system security on IBM Power by providing
a thorough security analysis of an AIX instance, Linux instance, or Red Hat OpenShift cluster.
This service is aimed at helping the client address issues that are related to IT compliance
and governance standards.
Other offerings
These security assessment offerings, and a wide range of offerings that cover areas such as
performance and availability, are provided by IBM Technology Expert Labs and are generally
available worldwide.
Here is the list of IBM Power offerings that are available at the time of writing:
Assess IBM Power System Health
Assess IBM Power Availability
Assess IBM Power Performance
Assess IBM Power Database Performance
Assess IBM Power Security for AIX, Linux, or Red Hat OpenShift
Assess IBM Power Security for IBM i
Assess IBM PowerVM Health
Assess IBM Power Capacity
Assess Oracle Licensing
Assess IBM i Performance
Assess Db2 Mirror for IBM i
Plan Migration to IBM Power10
Plan Oracle Exadata Migration to IBM Power
Install and Configure Linux on IBM Power
Install and Configure IBM License Management Tool for Software Asset Management
Install and Configure Security and Compliance Tools for IBM i
Build IBM PowerVM Recovery Manager
Build IBM PowerHA
Build IBM PowerSC
Build IBM PowerHA SystemMirror® for AIX
Build IBM PowerHA SystemMirror for IBM i
Build HA/DR Solution with PowerHA Tools for IBM i IASP Manager
Build Safeguarded Copy with IBM i
Build Cyber Vault with IBM i
Build Full System FlashCopy and Replication for IBM i
Migrate to IBM i Infrastructure
Perform IBM i Security Services
The complete list of standard services that are offered by IBM Technology Expert Labs for
IBM Power can be found at IBM Technology Expert Labs Power Offerings.
The offerings might differ in each geographical region. For details that are specific to your
region, contact an IBM Technology Expert Labs representative.
IBM Technology Expert Labs also offers utilities for IBM i that range from simple to complex
and that complement the tools that are provided natively in IBM i. Each tool has its own
purchase price and is available directly from IBM Technology Expert Labs.
Palo Alto Networks acquired the IBM QRadar Suite SaaS offerings on September 4, 2024,
and the QRadar Suite SaaS offerings are being integrated into Cortex XSIAM.
With a common user interface, shared insights, and connected workflows, this solution offers
integrated products for the following areas:
Endpoint security (endpoint detection and response (EDR) and Managed Detection and
Response (MDR))
EDR solutions are important because endpoints are the most exposed and exploited part
of any network. The rise of malicious and automated cyberactivity targeting endpoints
leaves organizations struggling against attackers who exploit zero-day vulnerabilities with
a barrage of ransomware attacks.
IBM QRadar EDR provides a more holistic EDR approach:
– Remediates known and unknown endpoint threats in near real time with intelligent
automation.
– Enables informed decision-making with attack visualization storyboards.
– Automates alert management to reduce analyst fatigue and focus on threats that
matter.
– Empowers staff and helps safeguard business continuity with advanced continuous
learning AI capabilities and a simple interface.
Security Information and Event Management (SIEM)
As the cost of a data breach rises and cyberattacks become more sophisticated, the role
of security operations center (SOC) analysts is more critical than ever. IBM QRadar SIEM
has advanced AI, powerful threat intelligence, and access to the latest detection content.
IBM QRadar SIEM uses multiple layers of AI and automation to enhance alert enrichment,
threat prioritization, and incident correlation. It presents related alerts cohesively in a
unified dashboard, reducing noise and saving time. IBM QRadar SIEM helps maximize
your security team’s productivity by providing a unified experience across all SOC tools,
with integrated, advanced AI and automation capabilities.
SOAR
The IBM QRadar SOAR platform can optimize your security team’s decision-making
processes, improve your SOC efficiency, and help ensure that your incident response
processes are supported by an intelligent automation and orchestration solution.
Winner of a Red Dot User Interface Design Award, IBM QRadar SOAR helps your
organization accomplish the following tasks:
– Cut response time with dynamic playbooks, customizable and automated workflows,
and recommended responses.
– Streamline incident response processes by time-stamping key actions and helping with
threat intelligence and response.
– Manage incident responses to over 200 international privacy and data breach
regulations with Breach Response.
In today’s complex threat environment, the ability to stay ahead of adversaries, design for
resilience, and create secure work environments is paramount. Trend Micro XDR services are
engineered to provide advanced threat defense through technologies and human intelligence
that proactively monitor, detect, investigate, and respond to attacks. The IBM Power
partnership helps ensure that data is protected with comprehensive end-to-end security at
every layer of the stack. These integrated security features are designed to help ensure
compliance with security regulatory requirements.
Trend Vision One delivers real-time insights to your executive dashboard. No more manual
tasks; just efficient, informed decision-making. While IBM Power frees client resources so that
they can focus on strategic business outcomes, Trend Vision One automates cybersecurity
reporting and playbooks for more efficient and productive security operations. Security teams
can stay ahead of compliance regulations, with real-time updates helping to ensure that their
enterprise security posture remains robust.
Other features
Trend Vision One includes other features:
Anti-malware
Web reputation service
Activity monitoring
Activity firewall
Application control
Behavioral analysis
Machine learning
EDR and XDR
Device control
Virtualization protection
Note: For more information about Trend Vision One, see Trend Vision One. For more
information about the Trend Vision One on IBM Power solution, see Endpoint Security
Solution - Trend Vision One.
When you transport data through APIs, you must have a protection layer to help ensure the
security of the data and to limit accessibility to known actors only. MuleSoft and IBM
collaborated to produce such a layer: Anypoint Flex Gateway on IBM Power.
In today's digital landscape, seamless connectivity and rapid data exchange are crucial for
business success. Organizations constantly seek innovative solutions to streamline
operations, and MuleSoft Anypoint Flex Gateway provides that capability.
Many companies leverage Salesforce's MuleSoft to manage and secure APIs across
cloud-native, containerized environments. Now, IBM Power users can tap into the power of
Anypoint Flex Gateway's advanced API protection layer to modernize applications and
accelerate API-driven initiatives.
This synergy unlocks new levels of agility, innovation, and efficiency for your digital
transformation journey. This native integration enables a smooth installation and operation of
the API gateway, effectively safeguarding your IBM Power applications.
Deploying Anypoint Flex Gateway close to your IBM Power hosted applications, APIs, and
data enhances the customer experience, enforces security policies, reduces data latency, and
boosts application performance. You can deploy the gateway on Red Hat OpenShift, Red Hat
Enterprise Linux (RHEL), and SUSE Linux Enterprise Server.
By combining the strengths of MuleSoft's Anypoint Platform with the performance and
reliability of IBM Power servers, businesses can confidently embark on their digital
transformation journeys equipped with the tools and capabilities to drive innovation, agility,
and growth.
Note: IBM and MuleSoft’s partnership announcement for Anypoint Flex Gateway on IBM
Power can be found at IBM and MuleSoft expand global relationship to accelerate
modernization on IBM Power. The solution brief can be found at MuleSoft + IBM Power:
Modernize and Manage Application Connectivity.
This section describes some notable companies that are active in IBM i security.
Raz-Lee Security
Raz-Lee Security specializes in providing advanced security solutions for IBM i. Their
offerings include tools for real-time threat detection, audit and compliance management, and
vulnerability assessment. The Raz-Lee iSecurity suite is highly regarded for its powerful and
customizable security modules, which help organizations proactively manage and mitigate
security risks. Their customer base spans various sectors such as banking, insurance,
manufacturing, and government, reflecting their ability to address diverse security challenges
across different industries.
Precisely
Precisely provides a range of IBM i solutions that help ensure data integrity, availability,
security, and compliance. Their IBM i security solutions include tools for access control,
monitoring, privacy, and malware defense. Precisely is known for its robust, scalable solutions
that can integrate seamlessly into existing IT infrastructures. These solutions deliver
market-leading IBM i security capabilities that help organizations successfully comply with
cybersecurity regulations and reduce security vulnerabilities. Also, these security offerings
seamlessly integrate with Precisely IBM i high availability (HA) solutions to deliver a greater
level of business resilience. Precisely customers range from large enterprises to small and
medium businesses (SMBs) in sectors like telecommunications, financial services, and
logistics.
Precisely also offers a no-charge assessment tool for IBM i. Assure Security Risk
Assessment checks over a dozen categories of security values, compares them to best
practices, reports on findings, and makes recommendations. You can find this security risk
assessment at Assure Security.
Fresche Solutions
Fresche Solutions offers a comprehensive IBM i Security Suite to protect IBM i servers from
modern security threats. Their solutions include tools for real-time monitoring, vulnerability
assessment, and compliance management. The Fresche security suite is noted for its
innovative approach to security management, combining ease of deployment with powerful
analytical capabilities. Their customer base includes businesses of all sizes, from SMBs to
large enterprises, in industries such as retail, manufacturing, and services, demonstrating
their versatile and scalable security offerings.
HNDL  Harvest Now, Decrypt Later
IPA  Identity, Policy, and Audit
IPL  initial program load
IPS  intrusion prevention systems
IPsec  Internet Protocol Security
IRP  Incident Response Plan
MDR  Managed Detection and Response
PASE for i  IBM Portable Application Solutions Environment for i
PCI DSS  Payment Card Industry Data Security Standard
PCIe  PCI Express
PEP2  Power Enterprise Pools 2
RCAC  row and column access control
SAST  Static Application Security Testing
VIOS  Virtual I/O Server
vTPM  virtual Trusted Platform Module
The publications that are listed in this section are considered suitable for a more detailed
description of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in
this document. Some publications that are referenced in this list might be available in softcopy
only.
Data Resiliency Designs: A Deep Dive into IBM Storage Safeguarded Snapshots,
REDP-5737
IBM Power Systems Cloud Security Guide: Protect IT Infrastructure In All Layers,
REDP-5659
IBM Storage DS8000 Safeguarded Copy: Updated for DS8000 Release 9.3.2,
REDP-5506
Implementing, Tuning, and Optimizing Workloads with Red Hat OpenShift on IBM Power,
SG24-8537
Introduction to IBM PowerVM, SG24-8535
Security Implementation with Red Hat OpenShift on IBM Power Systems, REDP-5690
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and additional materials, at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
Cloud Management Console Cloud Connector Security white paper:
https://www.ibm.com/downloads/cas/OGGYD90Y
IBM AIX Documentation on Security:
https://www.ibm.com/docs/en/aix/7.3?topic=security
Modernizing Business for Hybrid Cloud on Red Hat OpenShift Video Series:
https://community.ibm.com/community/user/power/blogs/jenna-murillo/2024/01/29/modernizing-business-for-hybrid-cloud-on-openshift
Red Hat OpenShift Documentation on Configuring your Firewall:
https://docs.openshift.com/container-platform/4.12/installing/install_config/configuring-firewall.html
SG24-8568-00
ISBN 0738462012
Printed in U.S.A.