OpenShift Container Platform 4.17
Support
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides information on getting support from Red Hat for OpenShift Container
Platform. It also contains information about remote health monitoring through Telemetry and the
Insights Operator. The document also details the benefits that remote health monitoring provides.
Table of Contents
CHAPTER 1. SUPPORT OVERVIEW 6
1.1. GET SUPPORT 6
1.2. REMOTE HEALTH MONITORING ISSUES 6
1.3. GATHER DATA ABOUT YOUR CLUSTER 6
1.4. TROUBLESHOOTING ISSUES 7
CHAPTER 2. MANAGING YOUR CLUSTER RESOURCES 9
2.1. INTERACTING WITH YOUR CLUSTER RESOURCES 9
CHAPTER 3. GETTING SUPPORT 10
3.1. GETTING SUPPORT 10
3.2. ABOUT THE RED HAT KNOWLEDGEBASE 10
3.3. SEARCHING THE RED HAT KNOWLEDGEBASE 10
3.4. SUBMITTING A SUPPORT CASE 11
3.5. ADDITIONAL RESOURCES 12
CHAPTER 4. REMOTE HEALTH MONITORING WITH CONNECTED CLUSTERS 13
4.1. ABOUT REMOTE HEALTH MONITORING 13
4.1.1. About Telemetry 14
4.1.1.1. Information collected by Telemetry 14
4.1.1.1.1. System information 14
4.1.1.1.2. Sizing Information 14
4.1.1.1.3. Usage information 15
4.1.2. About the Insights Operator 15
4.1.2.1. Information collected by the Insights Operator 15
4.1.3. Understanding Telemetry and Insights Operator data flow 16
4.1.4. Additional details about how remote health monitoring data is used 17
4.2. SHOWING DATA COLLECTED BY REMOTE HEALTH MONITORING 18
4.2.1. Showing data collected by Telemetry 18
4.2.2. Showing data collected by the Insights Operator 21
4.3. OPTING OUT OF REMOTE HEALTH REPORTING 22
4.3.1. Consequences of disabling remote health reporting 22
4.3.2. Modifying the global cluster pull secret to disable remote health reporting 22
4.3.3. Registering your disconnected cluster 23
4.3.4. Updating the global cluster pull secret 23
4.4. ENABLING REMOTE HEALTH REPORTING 24
4.4.1. Modifying your global cluster pull secret to enable remote health reporting 25
4.5. USING INSIGHTS TO IDENTIFY ISSUES WITH YOUR CLUSTER 26
4.5.1. About Red Hat Insights Advisor for OpenShift Container Platform 26
4.5.2. Understanding Insights Advisor recommendations 26
4.5.3. Displaying potential issues with your cluster 27
4.5.4. Displaying all Insights Advisor recommendations 27
4.5.5. Advisor recommendation filters 28
4.5.5.1. Filtering Insights advisor recommendations 28
4.5.5.2. Removing filters from Insights Advisor recommendations 29
4.5.6. Disabling Insights Advisor recommendations 29
4.5.7. Enabling a previously disabled Insights Advisor recommendation 30
4.5.8. Displaying the Insights status in the web console 30
4.6. USING THE INSIGHTS OPERATOR 31
4.6.1. Configuring Insights Operator 31
4.6.1.1. Creating the insights-config ConfigMap object 32
4.6.2. Understanding Insights Operator alerts 33
CHAPTER 5. GATHERING DATA ABOUT YOUR CLUSTER 53
5.1. ABOUT THE MUST-GATHER TOOL 53
5.1.1. Gathering data about your cluster for Red Hat Support 54
5.1.2. Must-gather flags 55
5.1.3. Gathering data about specific features 56
5.2. ADDITIONAL RESOURCES 62
5.2.1. Gathering network logs 62
5.2.2. Changing the must-gather storage limit 62
5.3. OBTAINING YOUR CLUSTER ID 63
5.4. ABOUT SOSREPORT 63
5.5. GENERATING A SOSREPORT ARCHIVE FOR AN OPENSHIFT CONTAINER PLATFORM CLUSTER NODE 64
5.6. QUERYING BOOTSTRAP NODE JOURNAL LOGS 66
5.7. QUERYING CLUSTER NODE JOURNAL LOGS 67
5.8. NETWORK TRACE METHODS 68
5.9. COLLECTING A HOST NETWORK TRACE 69
5.10. COLLECTING A NETWORK TRACE FROM AN OPENSHIFT CONTAINER PLATFORM NODE OR CONTAINER 70
5.11. PROVIDING DIAGNOSTIC DATA TO RED HAT SUPPORT 73
5.12. ABOUT TOOLBOX 74
Installing packages to a toolbox container 74
Starting an alternative image with toolbox 75
CHAPTER 6. SUMMARIZING CLUSTER SPECIFICATIONS 76
6.1. SUMMARIZING CLUSTER SPECIFICATIONS BY USING A CLUSTER VERSION OBJECT 76
CHAPTER 7. TROUBLESHOOTING 78
7.1. TROUBLESHOOTING INSTALLATIONS 78
7.1.1. Determining where installation issues occur 78
7.1.2. User-provisioned infrastructure installation considerations 78
7.1.3. Checking a load balancer configuration before OpenShift Container Platform installation 79
7.1.4. Specifying OpenShift Container Platform installer log levels 80
7.1.5. Troubleshooting openshift-install command issues 80
7.1.6. Monitoring installation progress 81
7.1.7. Gathering bootstrap node diagnostic data 82
CHAPTER 1. SUPPORT OVERVIEW
Telemetry: The Telemetry Client gathers and uploads the metrics values to Red Hat every four
minutes and thirty seconds. Red Hat uses this data to:
Insights Operator: By default, OpenShift Container Platform installs and enables the Insights
Operator, which reports configuration and component failure status every two hours. The
Insights Operator helps to:
Provide a solution and preventive action in Red Hat OpenShift Cluster Manager.
If you have enabled remote health reporting, use Insights to identify issues. You can optionally disable
remote health reporting.
The must-gather tool: Use the must-gather tool to collect information about your cluster and
to debug issues (see the command sketch after this list).
sosreport: Use the sosreport tool to collect configuration details, system information, and
diagnostic data for debugging purposes.
Cluster ID: Obtain the unique identifier for your cluster, when providing information to Red Hat
Support.
Bootstrap node journal logs: Gather bootkube.service journald unit logs and container logs
from the bootstrap node to troubleshoot bootstrap-related issues.
Cluster node journal logs: Gather journald unit logs and logs within /var/log on individual
cluster nodes to troubleshoot node-related issues.
A network trace: Provide a network packet trace from a specific OpenShift Container Platform
cluster node or a container to Red Hat Support to help troubleshoot network-related issues.
Diagnostic data: Use the redhat-support-tool command to gather diagnostic data about
your cluster.
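As a quick orientation to the tools in the preceding list, the following is a minimal sketch of the most common collection command, assuming cluster-admin access and a configured oc client; the detailed procedure appears in the data-gathering chapter:

$ oc adm must-gather --dest-dir=./must-gather-data

The --dest-dir flag is optional; without it, the tool writes the archive to a new directory in the current working directory.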
Installation issues: OpenShift Container Platform installation proceeds through various stages.
You can perform the following:
Node issues: A cluster administrator can verify and troubleshoot node-related issues by
reviewing the status, resource usage, and configuration of a node. You can query the following:
CRI-O issues: A cluster administrator can verify CRI-O container runtime engine status on each
cluster node. If you experience container runtime issues, perform the following:
Operating system issues: OpenShift Container Platform runs on Red Hat Enterprise Linux
CoreOS. If you experience operating system issues, you can investigate kernel crash
procedures. Ensure the following:
Enable kdump.
Network issues: To troubleshoot Open vSwitch issues, a cluster administrator can perform the
following:
Operator issues: A cluster administrator can do the following to resolve Operator issues:
Pod issues: A cluster administrator can troubleshoot pod-related issues by reviewing the status
of a pod and completing the following:
Source-to-image issues: A cluster administrator can observe the S2I stages to determine where
in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I)
issues:
Storage issues: A multi-attach storage error occurs when the mounting volume on a new node is
not possible because the failed node cannot unmount the attached volume. A cluster
administrator can do the following to resolve multi-attach storage issues:
Monitoring issues: A cluster administrator can follow the procedures on the troubleshooting
page for monitoring. If the metrics for your user-defined projects are unavailable or if
Prometheus is consuming a lot of disk space, check the following:
OpenShift CLI (oc) issues: Investigate OpenShift CLI (oc) issues by increasing the log level.
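For example, raising the oc log level makes oc print the underlying API requests it issues, which often helps isolate client-side problems. A minimal sketch; any oc subcommand accepts the flag:

$ oc get pods -n openshift-insights --loglevel=9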
CHAPTER 2. MANAGING YOUR CLUSTER RESOURCES
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have access to the web console or you have installed the oc CLI tool.
Procedure
1. To see which configuration Operators have been applied, run the following command:
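The command for this step is not preserved in this extract; one way to list the configuration resources in the config.openshift.io API group, shown as a sketch:

$ oc api-resources -o name | grep config.openshift.io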
2. To see what cluster resources you can configure, run the following command:
$ oc explain <resource_name>.config.openshift.io
3. To see the configuration of custom resource definition (CRD) objects in the cluster, run the
following command:
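The original command is also missing here; a hedged sketch that dumps such an object as YAML (replace <resource_name> with a name from the previous step):

$ oc get <resource_name>.config.openshift.io -o yaml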
CHAPTER 3. GETTING SUPPORT
Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red
Hat products.
To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager. Insights provides
details about issues and, if available, information on how to solve a problem.
If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for
the most relevant documentation component. Please provide specific details, such as the section name
and OpenShift Container Platform version.
Prerequisites
Procedure
2. Click Search.
3. In the search field, input keywords and strings relating to the problem, including:
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Log in to the Customer Support page of the Red Hat Customer Portal.
b. Select the appropriate category for your issue, such as Bug or Defect, and click Continue.
a. In the Summary field, enter a concise but descriptive problem summary and further details
about the symptoms being experienced, as well as your expectations.
5. Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the
problem that is being reported. If the suggested articles do not address the issue, click
Continue.
6. Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match
against the problem that is being reported. The list is refined as you provide more information
during the case creation process. If the suggested articles do not address the issue, click
Continue.
7. Ensure that the account information presented is as expected, and if not, amend accordingly.
8. Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not,
manually obtain your cluster ID.
To manually obtain your cluster ID using the OpenShift Container Platform web console:
Alternatively, it is possible to open a new support case through the OpenShift Container
Platform web console and have your cluster ID autofilled.
To obtain your cluster ID using the OpenShift CLI (oc), run the following command:
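The command itself did not survive extraction; a sketch that prints the cluster ID from the ClusterVersion object:

$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'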
9. Complete the following questions where prompted and then click Continue:
10. Upload relevant diagnostic data files and click Continue. It is recommended to include data
gathered using the oc adm must-gather command as a starting point, plus any issue specific
data that is not collected by that command.
CHAPTER 4. REMOTE HEALTH MONITORING WITH CONNECTED CLUSTERS
A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a
connected cluster.
Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the
OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected
clusters to Red Hat to enable subscription management automation, monitor the health of clusters,
assist with support, and improve customer experience.
The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red
Hat. The data is used to produce insights about potential issues that a cluster might be exposed to.
These insights are communicated to cluster administrators on OpenShift Cluster Manager.
Enhanced identification and resolution of issues. Events that might seem normal to an end-
user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some
issues can be more rapidly identified from this point of view and resolved without an end-user
needing to open a support case or file a Jira issue .
Advanced release management. OpenShift Container Platform offers the candidate, fast, and
stable release channels, which enable you to choose an update strategy. The graduation of a
release from fast to stable is dependent on the success rate of updates and on the events seen
during upgrades. With the information provided by connected clusters, Red Hat can improve the
quality of releases to stable channels and react more rapidly to issues found in the fast
channels.
Targeted prioritization of new features and functionality. The data collected provides
insights about which areas of OpenShift Container Platform are used most. With this
information, Red Hat can focus on developing the new features and functionality that have the
greatest impact for our customers.
A streamlined support experience. You can provide a cluster ID for a connected cluster when
creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a
streamlined support experience that is specific to your cluster, by using the connected
information. This document provides more information about that enhanced support
experience.
Predictive analytics. The insights displayed for your cluster on OpenShift Cluster Manager are
enabled by the information collected from connected clusters. Red Hat is investing in applying
deep learning, machine learning, and artificial intelligence automation to help identify issues
that OpenShift Container Platform clusters are exposed to.
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to
problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform
upgrades to customers to minimize service impact and continuously improve the upgrade experience.
This debugging information is available to Red Hat Support and Engineering teams with the same
restrictions as accessing data reported through support cases. All connected cluster information is used
by Red Hat to help make OpenShift Container Platform better and more intuitive to use.
Additional resources
See the OpenShift Container Platform update documentation for more information about
updating or upgrading a cluster.
Version information, including the OpenShift Container Platform cluster version and installed
update details that are used to determine update version availability
Update information, including the number of updates available per cluster, the channel and
image repository used for an update, update progress information, and the number of errors
that occur in an update
Configuration details that help Red Hat Support to provide beneficial support for customers,
including node configuration at the cloud infrastructure level, hostnames, IP addresses,
Kubernetes pod names, namespaces, and services
The OpenShift Container Platform framework components installed in a cluster and their
condition and status
Events for all namespaces listed as "related objects" for a degraded Operator
The name of the provider platform that OpenShift Container Platform is deployed on and the
data center location
Sizing information about clusters, machine types, and machines, including the number of CPU
cores and the amount of RAM used for each
The number of etcd members and the number of objects stored in the etcd cluster
Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not
intend to collect personal information. If Red Hat discovers that personal information has been
inadvertently received, Red Hat will delete such information. To the extent that any telemetry data
constitutes personal data, please refer to the Red Hat Privacy Statement for more information about
Red Hat’s privacy practices.
Additional resources
See Showing data collected by Telemetry for details about how to list the attributes that
Telemetry gathers from Prometheus in OpenShift Container Platform.
See the upstream cluster-monitoring-operator source code for a list of the attributes that
Telemetry gathers from Prometheus.
Telemetry is installed and enabled by default. If you need to opt out of remote health reporting,
see Opting out of remote health reporting .
Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor
service on Red Hat Hybrid Cloud Console. If any issues have been identified, Insights provides further
details and, if available, steps on how to solve a problem.
The Insights Operator does not collect identifying information, such as user names, passwords, or
certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights
data collection and controls.
Identify potential cluster issues and provide a solution and preventive actions in the Insights
Advisor service on Red Hat Hybrid Cloud Console
Additional resources
The Insights Operator is installed and enabled by default. If you need to opt out of remote
health reporting, see Opting out of remote health reporting .
General information about your cluster and its components to identify issues that are specific to
your OpenShift Container Platform version and environment
Configuration files, such as the image registry configuration, of your cluster to determine
incorrect settings and issues that are specific to parameters you set
Progress information of running updates, and the status of any component upgrades
Details of the platform that OpenShift Container Platform is deployed on and the region that
the cluster is located in
Cluster workload information transformed into discrete Secure Hash Algorithm (SHA) values,
which allows Red Hat to assess workloads for security and version vulnerabilities without
disclosing sensitive details
Additional resources
See Showing data collected by the Insights Operator for details about how to review the data
that is collected by the Insights Operator.
The Insights Operator source code is available for review and contribution. See the Insights
Operator upstream project for a list of the items collected by the Insights Operator.
The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an
archive. The archive is uploaded to OpenShift Cluster Manager every two hours for processing. The
Insights Operator also downloads the latest Insights analysis from OpenShift Cluster Manager. This is
used to populate the Insights status pop-up that is included in the Overview page in the OpenShift
Container Platform web console.
All of the communication with Red Hat occurs over encrypted channels by using Transport Layer
Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest.
Access to the systems that handle customer data is controlled through multi-factor authentication and
strict authorization controls. Access is granted on a need-to-know basis and is limited to required
operations.
Additional resources
See Monitoring overview for more information about the OpenShift Container Platform
monitoring stack.
See Configuring your firewall for details about configuring a firewall and enabling endpoints for
Telemetry and Insights
4.1.4. Additional details about how remote health monitoring data is used
The information collected to enable remote health monitoring is detailed in Information collected by
Telemetry and Information collected by the Insights Operator .
As further described in the preceding sections of this document, Red Hat collects data about your use of
the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance
or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting,
improving the offerings and user experience, responding to issues, and for billing purposes if applicable.
Collection safeguards
Red Hat employs technical and organizational measures designed to protect the telemetry and
configuration data.
Sharing
Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red
Hat to improve your user experience. Red Hat may share telemetry and configuration data with its
business partners in an aggregated form that does not identify customers to help the partners better
understand their markets and their customers’ use of Red Hat offerings or to ensure the successful
integration of products jointly supported by those partners.
Third parties
Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the
Telemetry and configuration data.
User control / enabling and disabling telemetry and configuration data collection
You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the
instructions in Opting out of remote health reporting .
Prerequisites
You have access to the cluster as a user with the cluster-admin role or the cluster-monitoring-
view role.
Procedure
1. Log in to a cluster.
2. Run the following command, which queries a cluster’s Prometheus service and returns the full
set of time series data captured by Telemetry:
NOTE
The following example contains some values that are specific to OpenShift Container
Platform on AWS.
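The curl invocation that precedes these parameters was lost in extraction. The following is a minimal sketch of the wrapper, assuming a prometheus-k8s-federate route in the openshift-monitoring namespace and a bearer token from oc whoami -t; the exact invocation in the original may differ:

$ curl -G -k -H "Authorization: Bearer $(oc whoami -t)" \
    https://$(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath="{.spec.host}")/federate \
    --data-urlencode 'match[]={__name__="up"}'

In the full command, the single placeholder selector is replaced by the long list of match[] selectors that follows.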
--data-urlencode 'match[]={__name__="workload:cpu_usage_cores:sum"}' \
--data-urlencode 'match[]={__name__="workload:memory_usage_bytes:sum"}' \
--data-urlencode 'match[]={__name__="cluster:virt_platform_nodes:sum"}' \
--data-urlencode 'match[]={__name__="cluster:node_instance_type_count:sum"}' \
--data-urlencode 'match[]={__name__="cnv:vmi_status_running:count"}' \
--data-urlencode 'match[]={__name__="cluster:vmi_request_cpu_cores:sum"}' \
--data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"}' \
--data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"}' \
--data-urlencode 'match[]={__name__="subscription_sync_total"}' \
--data-urlencode 'match[]={__name__="olm_resolution_duration_seconds"}' \
--data-urlencode 'match[]={__name__="csv_succeeded"}' \
--data-urlencode 'match[]={__name__="csv_abnormal"}' \
--data-urlencode 'match[]={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"}' \
--data-urlencode 'match[]={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"}'
\
--data-urlencode 'match[]={__name__="ceph_cluster_total_bytes"}' \
--data-urlencode 'match[]={__name__="ceph_cluster_total_used_raw_bytes"}' \
--data-urlencode 'match[]={__name__="ceph_health_status"}' \
--data-urlencode 'match[]={__name__="odf_system_raw_capacity_total_bytes"}' \
--data-urlencode 'match[]={__name__="odf_system_raw_capacity_used_bytes"}' \
--data-urlencode 'match[]={__name__="odf_system_health_status"}' \
--data-urlencode 'match[]={__name__="job:ceph_osd_metadata:count"}' \
--data-urlencode 'match[]={__name__="job:kube_pv:count"}' \
--data-urlencode 'match[]={__name__="job:odf_system_pvs:count"}' \
--data-urlencode 'match[]={__name__="job:ceph_pools_iops:total"}' \
--data-urlencode 'match[]={__name__="job:ceph_pools_iops_bytes:total"}' \
--data-urlencode 'match[]={__name__="job:ceph_versions_running:count"}' \
--data-urlencode 'match[]={__name__="job:noobaa_total_unhealthy_buckets:sum"}' \
--data-urlencode 'match[]={__name__="job:noobaa_bucket_count:sum"}' \
--data-urlencode 'match[]={__name__="job:noobaa_total_object_count:sum"}' \
--data-urlencode 'match[]={__name__="odf_system_bucket_count", system_type="OCS",
system_vendor="Red Hat"}' \
--data-urlencode 'match[]={__name__="odf_system_objects_total", system_type="OCS",
system_vendor="Red Hat"}' \
--data-urlencode 'match[]={__name__="noobaa_accounts_num"}' \
--data-urlencode 'match[]={__name__="noobaa_total_usage"}' \
--data-urlencode 'match[]={__name__="console_url"}' \
--data-urlencode 'match[]={__name__="cluster:ovnkube_master_egress_routing_via_host:max"}' \
--data-urlencode 'match[]={__name__="cluster:network_attachment_definition_instances:max"}' \
--data-urlencode 'match[]={__name__="cluster:network_attachment_definition_enabled_instance_up:max"}' \
--data-urlencode 'match[]={__name__="cluster:ingress_controller_aws_nlb_active:sum"}' \
--data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:min"}' \
--data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:max"}' \
--data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:avg"}' \
--data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:median"}'
\
--data-urlencode 'match[]={__name__="cluster:openshift_route_info:tls_termination:sum"}' \
--data-urlencode 'match[]={__name__="insightsclient_request_send_total"}' \
--data-urlencode 'match[]={__name__="cam_app_workload_migrations"}' \
--data-urlencode 'match[]={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"}' \
--data-urlencode 'match[]={__name__="cluster:alertmanager_integrations:max"}' \
--data-urlencode 'match[]={__name__="cluster:telemetry_selected_series:count"}' \
--data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_series:sum"}' \
--data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"}' \
--data-urlencode 'match[]={__name__="monitoring:container_memory_working_set_bytes:sum"}' \
--data-urlencode 'match[]={__name__="namespace_job:scrape_series_added:topk3_sum1h"}' \
--data-urlencode 'match[]={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"}' \
--data-urlencode 'match[]={__name__="monitoring:haproxy_server_http_responses_total:sum"}' \
--data-urlencode 'match[]={__name__="rhmi_status"}' \
--data-urlencode 'match[]={__name__="status:upgrading:version:rhoam_state:max"}' \
--data-urlencode 'match[]={__name__="state:rhoam_critical_alerts:max"}' \
--data-urlencode 'match[]={__name__="state:rhoam_warning_alerts:max"}' \
--data-urlencode 'match[]={__name__="rhoam_7d_slo_percentile:max"}' \
--data-urlencode 'match[]={__name__="rhoam_7d_slo_remaining_error_budget:max"}' \
--data-urlencode 'match[]={__name__="cluster_legacy_scheduler_policy"}' \
--data-urlencode 'match[]={__name__="cluster_master_schedulable"}' \
--data-urlencode 'match[]={__name__="che_workspace_status"}' \
--data-urlencode 'match[]={__name__="che_workspace_started_total"}' \
--data-urlencode 'match[]={__name__="che_workspace_failure_total"}' \
--data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_sum"}' \
--data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_count"}' \
--data-urlencode 'match[]={__name__="cco_credentials_mode"}' \
--data-urlencode 'match[]={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"}' \
--data-urlencode 'match[]={__name__="visual_web_terminal_sessions_total"}' \
--data-urlencode 'match[]={__name__="acm_managed_cluster_info"}' \
--data-urlencode 'match[]={__name__="cluster:vsphere_vcenter_info:sum"}' \
--data-urlencode 'match[]={__name__="cluster:vsphere_esxi_version_total:sum"}' \
--data-urlencode 'match[]={__name__="cluster:vsphere_node_hw_version_total:sum"}' \
--data-urlencode 'match[]={__name__="openshift:build_by_strategy:sum"}' \
--data-urlencode 'match[]={__name__="rhods_aggregate_availability"}' \
--data-urlencode 'match[]={__name__="rhods_total_users"}' \
--data-urlencode 'match[]={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"}' \
--data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"}' \
--data-urlencode 'match[]={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"}' \
--data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"}' \
--data-urlencode 'match[]={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"}' \
--data-urlencode 'match[]={__name__="jaeger_operator_instances_storage_types"}' \
--data-urlencode 'match[]={__name__="jaeger_operator_instances_strategies"}' \
--data-urlencode 'match[]={__name__="jaeger_operator_instances_agent_strategies"}' \
--data-urlencode 'match[]={__name__="appsvcs:cores_by_product:sum"}' \
--data-urlencode 'match[]={__name__="nto_custom_profiles:count"}' \
--data-urlencode 'match[]={__name__="openshift_csi_share_configmap"}' \
--data-urlencode 'match[]={__name__="openshift_csi_share_secret"}' \
--data-urlencode 'match[]={__name__="openshift_csi_share_mount_failures_total"}' \
--data-urlencode 'match[]={__name__="openshift_csi_share_mount_requests_total"}' \
--data-urlencode 'match[]={__name__="cluster:velero_backup_total:max"}' \
--data-urlencode 'match[]={__name__="cluster:velero_restore_total:max"}' \
--data-urlencode 'match[]={__name__="eo_es_storage_info"}' \
--data-urlencode 'match[]={__name__="eo_es_redundancy_policy_info"}' \
--data-urlencode 'match[]={__name__="eo_es_defined_delete_namespaces_total"}' \
--data-urlencode 'match[]={__name__="eo_es_misconfigured_memory_resources_info"}' \
--data-urlencode 'match[]={__name__="cluster:eo_es_data_nodes_total:max"}' \
--data-urlencode 'match[]={__name__="cluster:eo_es_documents_created_total:sum"}' \
--data-urlencode 'match[]={__name__="cluster:eo_es_documents_deleted_total:sum"}' \
--data-urlencode 'match[]={__name__="pod:eo_es_shards_total:max"}' \
--data-urlencode 'match[]={__name__="eo_es_cluster_management_state_info"}' \
--data-urlencode 'match[]={__name__="imageregistry:imagestreamtags_count:sum"}' \
--data-urlencode 'match[]={__name__="imageregistry:operations_count:sum"}' \
--data-urlencode 'match[]={__name__="log_logging_info"}' \
--data-urlencode 'match[]={__name__="log_collector_error_count_total"}' \
--data-urlencode 'match[]={__name__="log_forwarder_pipeline_info"}' \
--data-urlencode 'match[]={__name__="log_forwarder_input_info"}' \
--data-urlencode 'match[]={__name__="log_forwarder_output_info"}' \
--data-urlencode 'match[]={__name__="cluster:log_collected_bytes_total:sum"}' \
--data-urlencode 'match[]={__name__="cluster:log_logged_bytes_total:sum"}' \
--data-urlencode 'match[]={__name__="cluster:kata_monitor_running_shim_count:sum"}' \
--data-urlencode 'match[]={__name__="platform:hypershift_hostedclusters:max"}' \
--data-urlencode 'match[]={__name__="platform:hypershift_nodepools:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_bucket_claims:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_buckets_claims:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_resources:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_namespace_resources:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_buckets:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_namespace_buckets:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_accounts:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_usage:max"}' \
--data-urlencode 'match[]={__name__="namespace:noobaa_system_health_status:max"}' \
--data-urlencode 'match[]={__name__="ocs_advanced_feature_usage"}' \
--data-urlencode 'match[]={__name__="os_image_url_override:sum"}' \
--data-urlencode 'match[]={__name__="openshift:openshift_network_operator_ipsec_state:info"}'
Prerequisites
Procedure
1. Find the name of the currently running pod for the Insights Operator:
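The lookup command is not preserved in this extract; a hedged sketch that stores the running pod name in the variable used by the next step:

$ INSIGHTS_OPERATOR_POD=$(oc get pods -n openshift-insights \
    --field-selector=status.phase=Running \
    -o custom-columns=:metadata.name --no-headers)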
$ oc cp openshift-insights/$INSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-
data
The recent Insights Operator archives are now available in the insights-data directory.
1. Modify the global cluster pull secret to disable remote health reporting.
Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test
clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant
in qualifying OpenShift Container Platform in your environments and react more rapidly to product
issues.
Red Hat will not be able to monitor the success of product upgrades or the health of your
clusters without a support case being opened.
Red Hat will not be able to use configuration data to better triage customer support cases and
identify which configurations our customers find important.
The OpenShift Cluster Manager will not show data about your clusters including health and
usage information.
In restricted networks, Telemetry and Insights data can still be reported through appropriate
configuration of your proxy.
4.3.2. Modifying the global cluster pull secret to disable remote health reporting
You can modify your existing global cluster pull secret to disable remote health reporting. This disables
both Telemetry and the Insights Operator.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Download the global cluster pull secret to your local file system.
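The download command is missing from this extract; a sketch using oc extract, which writes the .dockerconfigjson file to the current directory (assuming the cluster-wide pull secret in the openshift-config namespace):

$ oc extract secret/pull-secret -n openshift-config --to=.

The cloud.openshift.com entry shown next is the section you remove from the downloaded file to disable reporting.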
"cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"}
You can now update your cluster to use this modified pull secret.
IMPORTANT
By registering your disconnected cluster, you can continue to report your subscription
usage to Red Hat. In turn, Red Hat can return accurate usage and capacity trends
associated with your subscription, so that you can use the returned information to better
organize subscription allocations across all of your resources.
Prerequisites
You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
1. Go to the Register disconnected cluster web page on the Red Hat Hybrid Cloud Console.
2. Optional: To access the Register disconnected cluster web page from the home page of the
Red Hat Hybrid Cloud Console, go to the Cluster List navigation menu item and then select the
Register cluster button.
3. Enter your cluster’s details in the provided fields on the Register disconnected cluster page.
4. From the Subscription settings section of the page, select the subscription settings that apply
to your Red Hat subscription offering.
Additional resources
How does the subscriptions service show my subscription data? (Getting Started with the
Subscription Service)
This procedure is required when you use a registry other than the one used during installation to
store images.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Optional: To append a new pull secret to the existing pull secret, complete the following steps:
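The command for this step did not survive extraction; a hedged sketch using oc registry login, whose --registry option the callout below describes:

$ oc registry login --registry="<registry>" \
    --auth-basic="<username>:<password>" \
    --to=<pull_secret_location>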
1 Provide the new registry. You can include multiple repositories within the same
registry, for example: --registry="<registry/my-namespace/my-repository>".
Alternatively, you can perform a manual update to the pull secret file.
2. Enter the following command to update the global pull secret for your cluster:
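A sketch of the update command, assuming the edited pull secret file is at <pull_secret_location>:

$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location>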
This update is rolled out to all nodes, which can take some time depending on the size of your
cluster.
NOTE
If you or your organization have disabled remote health reporting, you can enable this feature again. You
can see that remote health reporting is disabled from the message "Insights not available" in the Status
tile on the OpenShift Container Platform Web Console Overview page.
To enable remote health reporting, you must Modify the global cluster pull secret with a new
authorization token.
NOTE
Enabling remote health reporting enables both Insights Operator and Telemetry.
4.4.1. Modifying your global cluster pull secret to enable remote health reporting
You can modify your existing global cluster pull secret to enable remote health reporting. If you have
previously disabled remote health monitoring, you must first download a new pull secret with your
console.openshift.com access token from Red Hat OpenShift Cluster Manager.
Prerequisites
Procedure
1. Navigate to https://console.redhat.com/openshift/downloads.
{
"auths": {
"cloud.openshift.com": {
"auth": "<your_token>",
"email": "<email_address>"
}
}
}
3. Download the global cluster pull secret to your local file system.
$ cp pull-secret pull-secret-backup
It may take several minutes for the secret to update and your cluster to begin reporting.
Verification
4.5.1. About Red Hat Insights Advisor for OpenShift Container Platform
You can use Insights Advisor to assess and monitor the health of your OpenShift Container Platform
clusters. Whether you are concerned about individual clusters, or with your whole infrastructure, it is
important to be aware of the exposure of your cluster infrastructure to issues that can affect service
availability, fault tolerance, performance, or security.
Using cluster data collected by the Insights Operator, Insights repeatedly compares that data against a
library of recommendations. Each recommendation is a set of cluster-environment conditions that can
leave OpenShift Container Platform clusters at risk. The results of the Insights analysis are available in
the Insights Advisor service on Red Hat Hybrid Cloud Console. In the Console, you can perform the
following actions:
Learn more about individual recommendations, details about the risks they present, and get
resolutions tailored to your individual clusters.
Added: When the recommendation was published to the Insights Advisor archive
Category: Whether the issue has the potential to negatively affect service availability, fault
tolerance, performance, or security
Total risk: A value derived from the likelihood that the condition will negatively affect your
infrastructure, and the impact on operations if that were to happen
Description: A brief synopsis of the issue, including how it affects your clusters
Link to associated topics: More information from Red Hat about the issue
Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can
change, for example, if you fix an issue or a new issue has been detected.
Prerequisites
Procedure
A list of issues Insights has detected, grouped by risk (low, moderate, important, and
critical).
No clusters yet, if Insights has not yet analyzed the cluster. The analysis starts shortly after
the cluster has been installed, registered, and connected to the internet.
2. If any issues are displayed, click the > icon in front of the entry for more details.
Depending on the issue, the details can also contain a link to more information from Red Hat
about the issue.
Prerequisites
Procedure
2. Click the X icons next to the Clusters Impacted and Status filters.
You can now browse through all of the potential recommendations for your cluster.
By default, filters are set to only show enabled recommendations that are impacting one or more
clusters. To view all or disabled recommendations in the Insights library, you can customize the filters.
To apply a filter, select a filter type and then set its value based on the options that are available in the
drop-down list. You can apply multiple filters to the list of recommendations.
Total risk: Select one or more values from Critical, Important, Moderate, and Low indicating
the likelihood and the severity of a negative impact on a cluster.
Impact: Select one or more values from Critical, High, Medium, and Low indicating the
potential impact to the continuity of cluster operations.
Likelihood: Select one or more values from Critical, High, Medium, and Low indicating the
potential for a negative impact to a cluster if the recommendation comes to fruition.
Category: Select one or more categories from Service Availability, Performance, Fault
Tolerance, Security, and Best Practice to focus your attention on.
Clusters impacted: Set the filter to show recommendations currently impacting one or more
clusters, non-impacting recommendations, or all recommendations.
Risk of change: Select one or more values from High, Moderate, Low, and Very low indicating
the risk that the implementation of the resolution could have on cluster operations.
As an OpenShift Container Platform cluster manager, you can filter the recommendations that are
displayed on the recommendations list. By applying filters, you can reduce the number of reported
recommendations and concentrate on your highest priority recommendations.
The following procedure demonstrates how to set and remove Category filters; however, the procedure
is applicable to any of the filter types and respective values.
Prerequisites
You are logged in to the OpenShift Cluster Manager Hybrid Cloud Console .
Procedure
2. In the main, filter-type drop-down list, select the Category filter type.
3. Expand the filter-value drop-down list and select the checkbox next to each category of
recommendation you want to view. Leave the checkboxes for unnecessary categories clear.
Only recommendations from the selected categories are shown in the list.
Verification
After applying filters, you can view the updated recommendations list. The applied filters are
added next to the default filters.
You can apply multiple filters to the list of recommendations. When ready, you can remove them
individually or completely reset them.
Click the X icon next to each filter, including the default filters, to remove them individually.
Click Reset filters to remove only the filters that you applied, leaving the default filters in place.
NOTE
Disabling a recommendation for all of your clusters also applies to any future clusters.
Prerequisites
Procedure
To disable an alert:
a. Click the Options menu for that alert, and then click Disable recommendation.
To view the clusters affected by this alert before disabling the alert:
a. Click the name of the recommendation to disable. You are directed to the single
recommendation page.
c. Click Actions → Disable recommendation to disable the alert for all of your clusters.
Prerequisites
Procedure
Prerequisites
Procedure
Additional resources
The Insights Operator is installed and enabled by default. If you need to opt out of remote
health reporting, see Opting out of remote health reporting .
For more information on using Insights Advisor to identify issues with your cluster, see Using
Insights to identify issues with your cluster.
When a ConfigMap object or support secret exists, the contained attribute values override the default
Operator configuration values. If both a ConfigMap object and a support secret exist, the Operator
reads the ConfigMap object.
The ConfigMap object does not exist by default, so an OpenShift Container Platform cluster
administrator must create it.
NOTE
The insights-config ConfigMap object follows standard YAML formatting, wherein child
values are below the parent attribute and indented two spaces. For the Obfuscation
attribute, enter values as bulleted children of the parent attribute.
This procedure describes how to create the insights-config ConfigMap object for the Insights
Operator to set custom configurations.
IMPORTANT
Red Hat recommends you consult Red Hat Support before making changes to the
default Insights Operator configuration.
Prerequisites
You are logged in to the OpenShift Container Platform web console as a user with cluster-
admin role.
Procedure
3. Select Configure via: YAML view and enter your configuration preferences, for example
apiVersion: v1
kind: ConfigMap
metadata:
name: insights-config
namespace: openshift-insights
data:
config.yaml: |
dataReporting:
obfuscation:
- networking
- workload_names
sca:
disable: false
interval: 2h
alerting:
disabled: false
binaryData: {}
immutable: false
4. Optional: Select Form view and enter the necessary information that way.
7. For the Value field, either browse for a file to drag and drop into the field or enter your
configuration parameters manually.
8. Click Create and you can see the ConfigMap object and configuration information.
Currently, Insights Operator sends the following alerts when the conditions are met:
Alert Description
To prevent the Insights Operator from sending alerts to the cluster Prometheus instance, you create or
edit the insights-config ConfigMap object.
NOTE
If the insights-config ConfigMap object does not exist, you must create it when you first add custom
configurations. Note that configurations within the ConfigMap object take precedence over the default
settings defined in the config/pod.yaml file.
Prerequisites
You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
apiVersion: v1
kind: ConfigMap
# ...
data:
config.yaml: |
alerting:
disabled: true
# ...
7. Verify that the value of the config.yaml alerting attribute is set to disabled: true.
After you save the changes, Insights Operator no longer sends alerts to the cluster Prometheus
instance.
When alerts are disabled, the Insights Operator no longer sends alerts to the cluster Prometheus
instance. You can reenable them.
NOTE
Prerequisites
You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
apiVersion: v1
kind: ConfigMap
# ...
data:
config.yaml: |
alerting:
disabled: false
# ...
7. Verify that the value of the config.yaml alerting attribute is set to disabled: false.
After you save the changes, Insights Operator again sends alerts to the cluster Prometheus instance.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Find the name of the running pod for the Insights Operator:
$ oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-
data 1
1 Replace <insights_operator_pod_name> with the pod name output from the preceding
command.
The recent Insights Operator archives are now available in the insights-data directory.
IMPORTANT
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
NOTE
If you enable Technology Preview in your cluster, the Insights Operator runs gather
operations in individual pods. This is part of the Technology Preview feature set for the
Insights Operator and supports the new data gathering features.
You can view the time it takes for the Insights Operator to gather the information contained in the
archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor.
Prerequisites
Procedure
{
"name": "clusterconfig/authentication",
"duration_in_ms": 730, 1
"records_count": 1,
"errors": null,
"panic": null
}
4.6.4.2. Running an Insights Operator gather operation from the web console
To collect data, you can run an Insights Operator gather operation by using the OpenShift Container
Platform web console.
Prerequisites
You are logged in to the OpenShift Container Platform web console as a user with the cluster-
admin role.
Procedure
2. On the CustomResourceDefinitions page, in the Search by name field, find the DataGather
resource definition, and then click it.
4. On the CustomResourceDefinitions page, in the Search by name field, find the DataGather
resource definition, and then click it.
7. To create a new DataGather operation, edit the following configuration file and then save your
changes.
apiVersion: insights.openshift.io/v1alpha1
kind: DataGather
metadata:
name: <your_data_gather> 1
spec:
gatherers: 2
- name: workloads
state: Disabled
1 Under metadata, replace <your_data_gather> with a unique name for the gather
operation.
2 Under gatherers, specify any individual gather operations that you intend to disable. In the
example provided, workloads is the only data gather operation that is disabled and all of
the other default operations are set to run. When the spec parameter is empty, all of the
default gather operations run.
IMPORTANT
Do not add a prefix of periodic-gathering- to the name of your gather operation because
this string is reserved for other administrative operations and might impact the intended
gather operation.
Verification
2. On the Pods page, go to the Project pull-down menu, and then select Show default projects.
5. On the Pods page, go to the Project pull-down menu, and then select Show default projects.
7. Check that your new gather operation is prefixed with your chosen name under the list of pods
in the openshift-insights project. Upon completion, the Insights Operator automatically
uploads the data to Red Hat for processing.
4.6.4.3. Running an Insights Operator gather operation from the OpenShift CLI
You can run an Insights Operator gather operation by using the OpenShift Container Platform command
line interface.
Prerequisites
You are logged in to OpenShift Container Platform as a user with the cluster-admin role.
Procedure
$ oc apply -f <your_datagather_definition>.yaml
apiVersion: insights.openshift.io/v1alpha1
kind: DataGather
metadata:
name: <your_data_gather> 1
spec:
gatherers: 2
- name: workloads
state: Disabled
1 Under metadata, replace <your_data_gather> with a unique name for the gather
operation.
2 Under gatherers, specify any individual gather operations that you intend to disable. In the
example provided, workloads is the only data gather operation that is disabled and all of
the other default operations are set to run. When the spec parameter is empty, all of the
default gather operations run.
IMPORTANT
Do not add a prefix of periodic-gathering- to the name of your gather operation because
this string is reserved for other administrative operations and might impact the intended
gather operation.
Verification
Check that your new gather operation is prefixed with your chosen name under the list of pods
in the openshift-insights project. Upon completion, the Insights Operator automatically
uploads the data to Red Hat for processing.
Additional resources
You can disable the Insights Operator gather operations. Disabling the gather operations gives you the
ability to increase privacy for your organization as Insights Operator will no longer gather and send
Insights cluster reports to Red Hat. This will disable Insights analysis and recommendations for your
cluster without affecting other core functions that require communication with Red Hat such as cluster
transfers. You can view a list of attempted gather operations for your cluster from the /insights-
operator/gathers.json file in your Insights Operator archive. Be aware that some gather operations only
occur when certain conditions are met and might not appear in your most recent archive.
IMPORTANT
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
NOTE
If you enable Technology Preview in your cluster, the Insights Operator runs gather
operations in individual pods. This is part of the Technology Preview feature set for the
Insights Operator and supports the new data gathering features.
Prerequisites
You are logged in to the OpenShift Container Platform web console as a user with the cluster-
admin role.
Procedure
2. On the CustomResourceDefinitions page, use the Search by name field to find the
InsightsDataGather resource definition and click it.
5. Disable the gather operations by performing one of the following edits to the
InsightsDataGather configuration file:
a. To disable all the gather operations, enter all under the disabledGatherers key:
apiVersion: config.openshift.io/v1alpha1
kind: InsightsDataGather
metadata:
....
spec: 1
gatherConfig:
disabledGatherers:
- all 2
b. To disable individual gather operations, enter their values under the disabledGatherers
key:
spec:
gatherConfig:
disabledGatherers:
- clusterconfig/container_images 1
- clusterconfig/host_subnets
- workloads/workload_info
6. Click Save.
After you save the changes, the Insights Operator gather configurations are updated and the
operations will no longer occur.
NOTE
You can enable the Insights Operator gather operations if they have been disabled.
IMPORTANT
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
Prerequisites
You are logged in to the OpenShift Container Platform web console as a user with the cluster-
admin role.
Procedure
2. On the CustomResourceDefinitions page, use the Search by name field to find the
InsightsDataGather resource definition and click it.
apiVersion: config.openshift.io/v1alpha1
kind: InsightsDataGather
metadata:
....
spec:
  gatherConfig:
    disabledGatherers: all
To enable individual gather operations, remove their values under the disabledGatherers
key:
spec:
  gatherConfig:
    disabledGatherers:
    - clusterconfig/container_images
    - clusterconfig/host_subnets
    - workloads/workload_info
6. Click Save.
After you save the changes, the Insights Operator gather configurations are updated and the
affected gather operations start.
Prerequisites
You are logged in to the OpenShift Container Platform web console with the cluster-admin role.
The cluster is self-managed and the Deployment Validation Operator is installed.
Procedure
5. In the file, set the obfuscation attribute with the workload_names value.
apiVersion: v1
kind: ConfigMap
# ...
data:
  config.yaml: |
    dataReporting:
      obfuscation:
      - workload_names
# ...
7. Verify that the value of the config.yaml obfuscation attribute is set to - workload_names.
Additionally, you can choose to obfuscate the Insights Operator data before upload.
Prerequisites
Procedure
apiVersion: batch/v1
kind: Job
metadata:
  name: insights-operator-job
  annotations:
    config.openshift.io/inject-proxy: insights-operator
spec:
  backoffLimit: 6
  ttlSecondsAfterFinished: 600
  template:
    spec:
      restartPolicy: OnFailure
      serviceAccountName: operator
      nodeSelector:
        beta.kubernetes.io/os: linux
        node-role.kubernetes.io/master: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 900
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 900
      volumes:
      - name: snapshots
        emptyDir: {}
      - name: service-ca-bundle
        configMap:
          name: service-ca-bundle
          optional: true
      initContainers:
      - name: insights-operator
        image: quay.io/openshift/origin-insights-operator:latest
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - name: snapshots
          mountPath: /var/lib/insights-operator
        - name: service-ca-bundle
          mountPath: /var/run/configmaps/service-ca-bundle
          readOnly: true
        ports:
        - containerPort: 8443
          name: https
        resources:
          requests:
            cpu: 10m
            memory: 70Mi
        args:
        - gather
        - -v=4
        - --config=/etc/insights-operator/server.yaml
      containers:
      - name: sleepy
        image: quay.io/openshift/origin-base:latest
        args:
        - /bin/sh
        - -c
        - sleep 10m
        volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]
Example output
apiVersion: apps/v1
kind: Deployment
metadata:
  name: insights-operator
  namespace: openshift-insights
# ...
spec:
  template:
  # ...
    spec:
      containers:
      - args:
        # ...
        image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0...
# ...
apiVersion: batch/v1
kind: Job
metadata:
  name: insights-operator-job
# ...
spec:
# ...
  template:
    spec:
      initContainers:
      - name: insights-operator
        image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0...
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
Example output
Name:      insights-operator-job
Namespace: openshift-insights
# ...
Events:
  Type    Reason            Age    From            Message
  ----    ------            ----   ----            -------
  Normal  SuccessfulCreate  7m18s  job-controller  Created pod: insights-operator-job-<your_job>

where insights-operator-job-<your_job> is the name of the pod.
To copy the gathered data from the pod to your local machine, run the following command:

$ oc cp openshift-insights/insights-operator-job-<your_job>:/var/lib/insights-operator ./insights-data
Prerequisites
Procedure
{
  "auths": {
    "cloud.openshift.com": {
      "auth": "<your_token>",
      "email": "[email protected]"
    }
  }
}

where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive.
Verification steps
1. Log in to https://console.redhat.com/openshift.
Your cluster passed all recommendations if Insights Advisor did not identify any issues.
A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical).
WARNING
Obfuscation assigns non-identifying values to cluster IPv4 addresses, and uses a translation table that is
retained in memory to change IP addresses to their obfuscated versions throughout the Insights
Operator archive before uploading the data to console.redhat.com.
For cluster base domains, obfuscation changes the base domain to a hardcoded substring. For example,
cluster-api.openshift.example.com becomes cluster-api.<CLUSTER_BASE_DOMAIN>.
The following procedure enables obfuscation using the support secret in the openshift-config
namespace.
Prerequisites
You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
3. Search for the support secret using the Search by name field. If it does not exist, click Create
→ Key/value secret to create it.
6. Create a key named enableGlobalObfuscation with a value of true, and click Save.
10. To restart the insights-operator pod, click the Options menu, and then click Delete Pod.
Verification
3. Search for the obfuscation-translation-table secret using the Search by name field.
Alternatively, you can inspect /insights-operator/gathers.json in your Insights Operator archive for the
value "is_global_obfuscation_enabled": true.
Additional resources
For more information on how to download your Insights Operator archive, see Showing data
collected by the Insights Operator.
NOTE
The Insights Operator imports simple content access entitlements every eight hours, but can be
configured or disabled using the insights-config ConfigMap object in the openshift-insights
namespace.
NOTE
Simple content access must be enabled in Red Hat Subscription Management for the
importing to function.
Additional resources
See About simple content access in the Red Hat Subscription Central documentation, for more
information about simple content access.
See Using Red Hat subscriptions in builds for more information about using simple content
access entitlements in OpenShift Container Platform builds.
This procedure describes how to update the import interval to two hours (2h). You can specify hours (h) or hours and minutes, for example: 2h30m.
Prerequisites
You are logged in to the OpenShift Container Platform web console as a user with the cluster-
admin role.
Procedure
5. Set the sca attribute in the file to interval: 2h to import content every two hours.
apiVersion: v1
kind: ConfigMap
# ...
data:
  config.yaml: |
    sca:
      interval: 2h
# ...
7. Verify that the value of the config.yaml sca attribute is set to interval: 2h.
Prerequisites
You are logged in to the OpenShift Container Platform web console as cluster-admin.
Procedure
apiVersion: v1
kind: ConfigMap
# ...
data:
  config.yaml: |
    sca:
      disabled: true
# ...
7. Verify that the value of the config.yaml sca attribute is set to disabled: true.
Prerequisites
You are logged in to the OpenShift Container Platform web console as a user with the cluster-
admin role.
Procedure
apiVersion: v1
kind: ConfigMap
# ...
data:
  config.yaml: |
    sca:
      disabled: false
# ...
7. Verify that the value of the config.yaml sca attribute is set to disabled: false.
CHAPTER 5. GATHERING DATA ABOUT YOUR CLUSTER
It is recommended to provide:
Resource definitions
Service logs
By default, the oc adm must-gather command uses the default plugin image and writes into ./must-
gather.local.
Alternatively, you can collect specific information by running the command with the appropriate
arguments as described in the following sections:
To collect data related to one or more specific features, use the --image argument with an
image, as listed in a following section.
For example:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0
NOTE
Audit logs are not collected as part of the default set of information to reduce
the size of the files.
When you run oc adm must-gather, a new pod with a random name is created in a new project on the
cluster. The data is collected on that pod and saved in a new directory that starts with must-
gather.local in the current working directory.
Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-
namespace option.
For example:
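A minimal sketch, assuming <namespace> is an existing privileged namespace on the cluster and reusing the OpenShift Virtualization image shown earlier:

$ oc adm must-gather --run-namespace <namespace> \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0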
5.1.1. Gathering data about your cluster for Red Hat Support
You can gather debugging information about your cluster by using the oc adm must-gather CLI
command.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Navigate to the directory where you want to store the must-gather data.
$ oc adm must-gather
IMPORTANT
If you are in a disconnected environment, use the --image flag as part of must-
gather and point to the payload image.
NOTE
Because this command picks a random control plane node by default, the pod
might be scheduled to a control plane node that is in the NotReady and
SchedulingDisabled state.
a. If this command fails, for example, if you cannot schedule a pod on your cluster, then use the
oc adm inspect command to gather information for particular resources.
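As a sketch, the following inspects a single cluster Operator and writes the result to a local directory; the Operator name and destination directory are illustrative:

$ oc adm inspect clusteroperator/openshift-apiserver --dest-dir=<gather_dir>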
3. Create a compressed file from the must-gather directory that was just created in your working
directory. For example, on a computer that uses a Linux operating system, run the following
command:
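A sketch of such a command; the directory suffix is illustrative:

$ tar cvaf must-gather.tar.gz must-gather.local.<run_id>/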
4. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.
--all-images
Example: oc adm must-gather --all-images=false
Collect must-gather data using the default image for all Operators on the cluster that are annotated with operators.openshift.io/must-gather-image.

--dest-dir
Example: oc adm must-gather --dest-dir='<directory_name>'
Set a specific directory on the local machine where the gathered data is written.

--image
Example: oc adm must-gather --image=[<plugin_image>]
Specify a must-gather plugin image to run. If not specified, OpenShift Container Platform’s default must-gather image is used.

--node-name
Example: oc adm must-gather --node-name='<node>'
Set a specific node to use. If not specified, by default a random master is used.

--node-selector
Example: oc adm must-gather --node-selector='<node_selector_name>'
Set a specific node selector to use. Only relevant when specifying a command and image which needs to capture data on a set of cluster nodes simultaneously.

--since
Example: oc adm must-gather --since=<time>
Only return logs newer than the specified duration. Defaults to all logs. Plugins are encouraged but not required to support this. Only one since-time or since may be used.

--since-time
Example: oc adm must-gather --since-time='<date_and_time>'
Only return logs after a specific date and time, expressed in (RFC3339) format. Defaults to all logs. Plugins are encouraged but not required to support this. Only one since-time or since may be used.

--source-dir
Example: oc adm must-gather --source-dir='/<directory_name>/'
Set the specific directory on the pod where you copy the gathered data from.

--timeout
Example: oc adm must-gather --timeout='<time>'
The length of time to gather data before timing out, expressed as seconds, minutes, or hours, for example, 3s, 5m, or 2h. Time specified must be higher than zero. Defaults to 10 minutes if not specified.

--volume-percentage
Example: oc adm must-gather --volume-percentage=<percent>
Specify maximum percentage of pod’s allocated volume that can be used for must-gather. If this limit is exceeded, must-gather stops gathering, but still copies gathered data. Defaults to 30% if not specified.
NOTE
To determine the latest version for an OpenShift Container Platform component’s image,
see the OpenShift Operator Life Cycles web page on the Red Hat Customer Portal.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Navigate to the directory where you want to store the must-gather data.
2. Run the oc adm must-gather command with one or more --image or --image-stream
arguments.
NOTE
For information on gathering data about the Custom Metrics Autoscaler, see
the Additional resources section that follows.
For example, the following command gathers both the default cluster data and information
specific to OpenShift Virtualization:
$ oc adm must-gather \
  --image-stream=openshift/must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.0
You can use the must-gather tool with additional arguments to gather data that is specifically
related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For
OpenShift Logging, run the following command:
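One commonly used form, shown here as a sketch, derives the plugin image from the cluster-logging-operator deployment; verify the namespace, deployment name, and JSONPath against your Logging installation:

$ oc adm must-gather \
  --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="cluster-logging-operator")].image}')

The gathered data is written to a directory structure similar to the following: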
├── cluster-logging
│ ├── clo
│ │ ├── cluster-logging-operator-74dd5994f-6ttgt
│ │ ├── clusterlogforwarder_cr
│ │ ├── cr
│ │ ├── csv
│ │ ├── deployment
│ │ └── logforwarding_cr
│ ├── collector
│ │ ├── fluentd-2tr64
│ ├── eo
│ │ ├── csv
│ │ ├── deployment
│ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4
│ ├── es
│ │ ├── cluster-elasticsearch
│ │ │ ├── aliases
│ │ │ ├── health
│ │ │ ├── indices
│ │ │ ├── latest_documents.json
│ │ │ ├── nodes
│ │ │ ├── nodes_stats.json
│ │ │ └── thread_pool
│ │ ├── cr
│ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
│ │ └── logs
│ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
│ ├── install
│ │ ├── co_logs
│ │ ├── install_plan
│ │ ├── olmo_logs
│ │ └── subscription
│ └── kibana
│ ├── cr
│ ├── kibana-9d69668d4-2rkvz
├── cluster-scoped-resources
│ └── core
│ ├── nodes
│ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml
│ └── persistentvolumes
│ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml
├── event-filter.html
├── gather-debug.log
└── namespaces
├── openshift-logging
│ ├── apps
│ │ ├── daemonsets.yaml
│ │ ├── deployments.yaml
│ │ ├── replicasets.yaml
│ │ └── statefulsets.yaml
│ ├── batch
│ │ ├── cronjobs.yaml
│ │ └── jobs.yaml
│ ├── core
│ │ ├── configmaps.yaml
│ │ ├── endpoints.yaml
│ │ ├── events
│ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml
│ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml
│ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml
│ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml
│ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml
│ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml
│ │ ├── events.yaml
│ │ ├── persistentvolumeclaims.yaml
│ │ ├── pods.yaml
│ │ ├── replicationcontrollers.yaml
│ │ ├── secrets.yaml
│ │ └── services.yaml
│ ├── openshift-logging.yaml
│ ├── pods
│ │ ├── cluster-logging-operator-74dd5994f-6ttgt
│ │ │ ├── cluster-logging-operator
│ │ │ │ └── cluster-logging-operator
│ │ │ │ └── logs
│ │ │ │ ├── current.log
│ │ │ │ ├── previous.insecure.log
│ │ │ │ └── previous.log
│ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml
│ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff
│ │ │ ├── cluster-logging-operator-registry
│ │ │ │ └── cluster-logging-operator-registry
│ │ │ │ └── logs
│ │ │ │ ├── current.log
│ │ │ │ ├── previous.insecure.log
│ │ │ │ └── previous.log
│ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml
│ │ │ └── mutate-csv-and-generate-sqlite-db
│ │ │ └── mutate-csv-and-generate-sqlite-db
│ │ │ └── logs
│ │ │ ├── current.log
│ │ │ ├── previous.insecure.log
│ │ │ └── previous.log
│ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
│ │ ├── elasticsearch-im-app-1596030300-bpgcx
│ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml
│ │ │ └── indexmanagement
│ │ │ └── indexmanagement
│ │ │ └── logs
│ │ │ ├── current.log
│ │ │ ├── previous.insecure.log
│ │ │ └── previous.log
│ │ ├── fluentd-2tr64
│ │ │ ├── fluentd
│ │ │ │ └── fluentd
│ │ │ │ └── logs
│ │ │ │ ├── current.log
│ │ │ │ ├── previous.insecure.log
│ │ │ │ └── previous.log
│ │ │ ├── fluentd-2tr64.yaml
│ │ │ └── fluentd-init
│ │ │ └── fluentd-init
│ │ │ └── logs
│ │ │ ├── current.log
│ │ │ ├── previous.insecure.log
│ │ │ └── previous.log
│ │ ├── kibana-9d69668d4-2rkvz
│ │ │ ├── kibana
│ │ │ │ └── kibana
│ │ │ │ └── logs
│ │ │ │ ├── current.log
│ │ │ │ ├── previous.insecure.log
│ │ │ │ └── previous.log
│ │ │ ├── kibana-9d69668d4-2rkvz.yaml
│ │ │ └── kibana-proxy
│ │ │ └── kibana-proxy
│ │ │ └── logs
│ │ │ ├── current.log
│ │ │ ├── previous.insecure.log
│ │ │ └── previous.log
│ └── route.openshift.io
│ └── routes.yaml
└── openshift-operators-redhat
├── ...
3. Run the oc adm must-gather command with one or more --image or --image-stream
arguments. For example, the following command gathers both the default cluster data and
information specific to KubeVirt:
$ oc adm must-gather \
  --image-stream=openshift/must-gather \
  --image=quay.io/kubevirt/must-gather
4. Create a compressed file from the must-gather directory that was just created in your working
directory. For example, on a computer that uses a Linux operating system, run the following
command:
5. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.
Procedure
NOTE
By default, the must-gather tool collects the OVN nbdb and sbdb databases from all of the nodes in the cluster. Add the -- gather_network_logs option to include additional logs that contain OVN-Kubernetes transactions for the OVN nbdb database.
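For example, a sketch of a run that also collects the OVN transaction logs; verify the option against your oc version:

$ oc adm must-gather -- gather_network_logs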
2. Create a compressed file from the must-gather directory that was just created in your working
directory. For example, on a computer that uses a Linux operating system, run the following
command:
3. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.
If the container reaches the storage limit, an error message similar to the following example is generated.
Example output
Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting...
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
Run the oc adm must-gather command with the volume-percentage flag. The new value
cannot exceed 100.
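A minimal sketch, using a placeholder for the percentage value:

$ oc adm must-gather --volume-percentage <volume_percentage>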
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have access to the web console or the OpenShift CLI (oc) installed.
Procedure
To open a support case and have your cluster ID autofilled using the web console:
a. From the toolbar, navigate to (?) Help and select Share Feedback from the list.
b. Click Open a support case from the Tell us about your experience window.
To obtain your cluster ID using the OpenShift CLI (oc), run the following command:
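A sketch of one such command; the JSONPath expression and the version object name are assumptions to verify against your cluster:

$ oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'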
The sosreport tool provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.
In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific
OpenShift Container Platform node. For example, it might sometimes be necessary to review system
logs or other node-specific data that is not included within the output of oc adm must-gather.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
$ oc get nodes
2. Enter into a debug session on the target node. This step instantiates a debug pod called
<node_name>-debug:
$ oc debug node/my-cluster-node
To enter into a debug session on the target node that is tainted with the NoExecute effect, add
a toleration to a dummy namespace, and start the debug pod in the dummy namespace:
$ oc new-project dummy
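A sketch of one way to add the toleration, by setting a default-tolerations annotation on the dummy namespace; treat the annotation key and value as assumptions to verify for your cluster version:

$ oc patch namespace dummy --type=merge \
  -p '{"metadata": {"annotations": {"scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}'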
$ oc debug node/my-cluster-node
3. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file
system in /host within the pod. By changing the root directory to /host, you can run binaries
contained in the host’s executable paths:
# chroot /host
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster
changes. Accessing cluster nodes by using SSH is not recommended. However, if
the OpenShift Container Platform API is not available, or the kubelet is not
properly functioning on the target node, oc operations will be impacted. In such
situations, it is possible to access nodes using ssh core@<node>.
<cluster_name>.<base_domain> instead.
4. Start a toolbox container, which includes the required binaries and plugins to run sosreport:
# toolbox
a. Run the sos report command to collect necessary troubleshooting data on crio and
podman:
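A sketch of the command with the CRI-O and Podman plugin options enabled; treat the plugin option names as assumptions if your sos version differs:

# sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on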
d. Provide the Red Hat Support case ID. sosreport adds the ID to the archive’s file name.
e. The sosreport output provides the archive’s location and checksum. The following sample
output references support case ID 01234567:
1 The sosreport archive’s file path is outside of the chroot environment because the
toolbox container mounts the host’s root directory at /host.
6. Provide the sosreport archive to Red Hat Support for analysis, using one of the following
methods.
Upload the file to an existing Red Hat support case directly from an OpenShift Container
Platform cluster.
a. From within the toolbox container, run redhat-support-tool to attach the archive
directly to an existing Red Hat support case. This example uses support case ID
01234567:
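A sketch of the attachment command, with the archive path shown as a placeholder:

# redhat-support-tool addattachment -c 01234567 /host/var/tmp/<sosreport_archive>.tar.xz 1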
1 The toolbox container mounts the host’s root directory at /host. Reference the
absolute path from the toolbox container’s root directory, including /host/, when
specifying files to upload through the redhat-support-tool command.
1 The debug container mounts the host’s root directory at /host. Reference the
absolute path from the debug container’s root directory, including /host, when
specifying target files for concatenation.
b. Navigate to an existing support case within the Customer Support page of the Red Hat
Customer Portal.
c. Select Attach files and follow the prompts to upload the file.
Prerequisites
You have the fully qualified domain name of the bootstrap node.
Procedure
1. Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container
Platform installation. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified
domain name:
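A sketch of the query over SSH:

$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service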
2. Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace
<bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs
$pod; done'
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The
following example queries control plane nodes only:
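A sketch of the query:

$ oc adm node-logs --role=master -u kubelet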
a. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:
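For instance, a sketch using the --path option:

$ oc adm node-logs --role=master --path=openshift-apiserver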
b. Inspect a specific log within a /var/log/ subdirectory. The following example outputs
/var/log/openshift-apiserver/audit.log contents from all control plane nodes:
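A sketch that prints the log contents:

$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log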
c. If the API is not functional, review the logs on each node using SSH instead. The following
example tails /var/log/openshift-apiserver/audit.log:
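A sketch of the SSH form, using the node name pattern shown elsewhere in this chapter:

$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log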
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
Before attempting to collect diagnostic data over SSH, review whether the
data collected by running oc adm must gather and other oc commands is
sufficient instead. However, if the OpenShift Container Platform API is not
available, or the kubelet is not properly functioning on the target node, oc
operations will be impacted. In such situations, it is possible to access nodes
using ssh core@<node>.<cluster_name>.<base_domain>.
OpenShift Container Platform supports two ways of performing a network trace. Review the following
table and choose the method that meets your needs.
Collecting a host network trace: You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met. You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue.

Collecting a network trace from an OpenShift Container Platform node or container: You perform a packet capture on one node or one container. You run the tcpdump command interactively, so you can control the duration of the packet capture. You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually. This method uses the cat command and shell redirection to copy the packet capture data from the node or container to the client machine.
You can use a combination of the oc adm must-gather command and the
registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from
nodes. Analyzing packet captures can help you troubleshoot network communication issues.
The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes.
The tcpdump command records the packet captures in the pods. When the tcpdump command exits,
the oc adm must-gather command transfers the files with the packet captures from the pods to your
client machine.
TIP
The sample command in the following procedure demonstrates performing a packet capture with the
tcpdump command. However, you can run any command in the container image that is specified in the -
-image argument to gather troubleshooting information from multiple nodes at the same time.
Prerequisites
You are logged in to OpenShift Container Platform as a user with the cluster-admin role.
Procedure
1. Run a packet capture from the host network on some nodes by running the following command:
$ oc adm must-gather \
  --dest-dir /tmp/captures \ 1
  --source-dir '/tmp/tcpdump/' \ 2
  --image registry.redhat.io/openshift4/network-tools-rhel8:latest \ 3
  --node-selector 'node-role.kubernetes.io/worker' \ 4
  --host-network=true \ 5
  --timeout 30s \ 6
  -- \
  tcpdump -i any \ 7
  -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300

1 The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory.
2 When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod.
3 The --image argument specifies a container image that includes the tcpdump command.
4 The --node-selector argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes.
5 The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node.
6 The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes.
7 The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name.
2. Perform the action, such as accessing a web application, that triggers the network
communication issue while the network trace captures packets.
3. Review the packet capture files that oc adm must-gather transferred from the pods to your
client machine:
tmp/captures
├── event-filter.html
├── ip-10-0-192-217-ec2-internal 1
│ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca...
│ └── 2022-01-13T19:31:31.pcap
├── ip-10-0-201-178-ec2-internal 2
│ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca...
│ └── 2022-01-13T19:31:30.pcap
├── ip-...
└── timestamp
1 2 The packet captures are stored in directories that identify the hostname, container, and
file name. If you did not specify the --node-selector argument, then the directory level for
the hostname is not present.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
$ oc get nodes
2. Enter into a debug session on the target node. This step instantiates a debug pod called
<node_name>-debug:
$ oc debug node/my-cluster-node
3. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file
system in /host within the pod. By changing the root directory to /host, you can run binaries
contained in the host’s executable paths:
# chroot /host
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster
changes. Accessing cluster nodes by using SSH is not recommended. However, if
the OpenShift Container Platform API is not available, or the kubelet is not
properly functioning on the target node, oc operations will be impacted. In such
situations, it is possible to access nodes using ssh core@<node>.
<cluster_name>.<base_domain> instead.
4. From within the chroot environment console, obtain the node’s interface names:
# ip ad
5. Start a toolbox container, which includes the required binaries and plugins to run sosreport:
# toolbox
6. Initiate a tcpdump session on the cluster node and redirect output to a capture file. This
example uses ens5 as the interface name:
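A sketch of such a session; the capture file name is illustrative:

# tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/<node_name>_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1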
1 The tcpdump capture file’s path is outside of the chroot environment because the
toolbox container mounts the host’s root directory at /host.
7. If a tcpdump capture is required for a specific container on the node, follow these steps.
a. Determine the target container ID. The chroot host command precedes the crictl
command in this step because the toolbox container mounts the host’s root directory at
/host:
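A minimal sketch:

# chroot /host crictl ps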
b. Determine the container’s process ID. In this example, the container ID is a7fe32346b120:
# chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print $2}'
c. Initiate a tcpdump session on the container and redirect output to a capture file. This
example uses 49628 as the container’s process ID and ens5 as the interface name. The
nsenter command enters the namespace of a target process and runs a command in its
namespace. Because the target process in this example is a container’s process ID, the
tcpdump command is run in the container’s namespace from the host:
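A sketch of such a session, reusing the example process ID and interface name; the capture file name is illustrative:

# nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/<container_id>_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1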
1 The tcpdump capture file’s path is outside of the chroot environment because the
toolbox container mounts the host’s root directory at /host.
8. Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following
methods.
Upload the file to an existing Red Hat support case directly from an OpenShift Container
Platform cluster.
a. From within the toolbox container, run redhat-support-tool to attach the file directly to
an existing Red Hat Support case. This example uses support case ID 01234567:
1 The toolbox container mounts the host’s root directory at /host. Reference the
absolute path from the toolbox container’s root directory, including /host/, when
specifying files to upload through the redhat-support-tool command.
1 The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.
b. Navigate to an existing support case within the Customer Support page of the Red Hat
Customer Portal.
c. Select Attach files and follow the prompts to upload the file.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer
Portal.
1 The debug container mounts the host’s root directory at /host. Reference the absolute path from the debug container’s root directory, including /host, when specifying target files for concatenation.
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Transferring files from a cluster node by using scp is not
recommended. However, if the OpenShift Container Platform API is not
available, or the kubelet is not properly functioning on the target node, oc
operations will be impacted. In such situations, it is possible to copy
diagnostic files from a node by running scp core@<node>.
<cluster_name>.<base_domain>:<file_path> <local_path>.
2. Navigate to an existing support case within the Customer Support page of the Red Hat
Customer Portal.
3. Select Attach files and follow the prompts to upload the file.
The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red
Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an
image that is an alternative to the standard support tools image.
Prerequisites
Procedure
1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file
system in /host within the pod. By changing the root directory to /host, you can run binaries
contained in the host’s executable paths:
# chroot /host
# toolbox
Prerequisites
Procedure
1. Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file
system in /host within the pod. By changing the root directory to /host, you can run binaries
contained in the host’s executable paths:
# chroot /host
2. Create a .toolboxrc file in the home directory for the root user ID:
# vi ~/.toolboxrc
REGISTRY=quay.io
IMAGE=fedora/fedora:33-x86_64
TOOLBOX_NAME=toolbox-fedora-33
# toolbox
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
$ oc get clusterversion
2. Obtain a detailed summary of cluster specifications, update availability, and update history:
$ oc describe clusterversion
Example output
Name:         version
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterVersion
# ...
  Image:    quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce
  URL:      https://access.redhat.com/errata/RHSA-2023:4456
  Version:  4.13.8
  History:
    Completion Time:  2023-08-17T13:20:21Z
    Image:            quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce
    Verified:         false
    Version:          4.13.8
# ...
CHAPTER 7. TROUBLESHOOTING
2. The bootstrap machine boots and starts hosting the remote resources required for the control
plane machines to boot.
3. The control plane machines fetch the remote resources from the bootstrap machine and finish
booting.
4. The control plane machines use the bootstrap machine to form an etcd cluster.
5. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster.
6. The temporary control plane schedules the production control plane to the control plane
machines.
7. The temporary control plane shuts down and passes control to the production control plane.
8. The bootstrap machine adds OpenShift Container Platform components into the production
control plane.
11. The control plane installs additional services in the form of a set of Operators.
12. The cluster downloads and configures remaining components needed for the day-to-day
operation, including the creation of worker machines in supported environments.
You can alternatively install OpenShift Container Platform 4.17 on infrastructure that you provide. If you
use this installation method, follow user-provisioned infrastructure installation documentation carefully.
Additionally, review the following considerations before the installation:
Check the Red Hat Enterprise Linux (RHEL) Ecosystem to determine the level of Red Hat
Enterprise Linux CoreOS (RHCOS) support provided for your chosen server hardware or
virtualization technology.
Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set.
Install cloud provider integration if you want to enable features such as dynamic storage, on-
demand service routing, node hostname to Kubernetes hostname resolution, and cluster
autoscaling.
NOTE
A provider-specific Machine API implementation is required if you want to use machine sets or
autoscaling to automatically provision OpenShift Container Platform cluster nodes.
Check whether your chosen cloud provider offers a method to inject Ignition configuration files
into hosts as part of their initial deployment. If they do not, you will need to host Ignition
configuration files by using an HTTP server. The steps taken to troubleshoot Ignition
configuration file issues will differ depending on which of these two methods is deployed.
A load balancer is required to distribute API requests across all control plane nodes in highly
available OpenShift Container Platform environments. You can use any TCP-based load
balancing solution that meets OpenShift Container Platform DNS routing and port
requirements.
Prerequisites
You have configured an external load balancer of your choosing, in preparation for an OpenShift
Container Platform installation. The following example is based on a Red Hat Enterprise Linux
(RHEL) host using HAProxy to provide load balancing services to a cluster.
You have configured DNS in preparation for an OpenShift Container Platform installation.
Procedure
2. Verify that the load balancer is listening on the required ports. The following example references ports 80, 443, 6443, and 22623.
For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 6, verify port status by
using the netstat command:
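A sketch of such a check, filtering for the ports listed above:

$ netstat -nltupe | grep -E ':80|:443|:6443|:22623'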
For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 7 or 8, verify port
status by using the ss command:
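Similarly, a sketch using ss:

$ ss -nltupe | grep -E ':80|:443|:6443|:22623'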
3. Check that the wildcard DNS record resolves to the load balancer:
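A sketch, querying your DNS server directly; the placeholders are illustrative:

$ dig <wildcard_fqdn> @<dns_server>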
Prerequisites
Procedure
Set the installation log level to debug when initiating the installation:
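A sketch, assuming the installation assets live in <installation_directory>:

$ ./openshift-install create cluster --dir <installation_directory> --log-level debug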
The installation has been initiated within 24 hours of Ignition configuration file creation. The
Ignition files are created when the following command is run:
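For reference, a sketch of that command:

$ ./openshift-install create ignition-configs --dir <installation_directory>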
The install-config.yaml file is in the same directory as the installer. If an alternative installation
path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml
file exists within that directory.
Prerequisites
You have access to the cluster as a user with the cluster-admin cluster role.
You have the fully qualified domain names of the bootstrap and control plane nodes.
Procedure
$ tail -f ~/<installation_directory>/.openshift_install.log
2. Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This
provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn>
with the bootstrap node’s fully qualified domain name:
3. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This
provides visibility into control plane node agent activity.
b. If the API is not functional, review the logs using SSH instead. Replace <master-node>.
<cluster_name>.<base_domain> with appropriate values:
4. Monitor crio.service journald unit logs on control plane nodes, after they have booted. This
provides visibility into control plane node CRI-O container runtime activity.
b. If the API is not functional, review the logs using SSH instead. Replace <master-node>.
<cluster_name>.<base_domain> with appropriate values:
Prerequisites
You have the fully qualified domain name of the bootstrap node.
If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP
server’s fully qualified domain name and the port number. You must also have SSH access to
the HTTP host.
Procedure
1. If you have access to the bootstrap node’s console, monitor the console until the node reaches
the login prompt.
a. Verify the bootstrap node Ignition file URL. Replace <http_server_fqdn> with HTTP
server’s fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1
1 The -I option returns the header only. If the Ignition file is available on the specified
URL, the command returns 200 OK status. If it is not available, the command
returns 404 file not found.
b. To verify that the Ignition file was received by the bootstrap node, query the HTTP
server logs on the serving host. For example, if you are using an Apache web server to
serve Ignition files, enter the following command:
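A sketch, assuming the default Apache access log location on the serving host:

$ grep -is 'bootstrap.ign' /var/log/httpd/access_log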
If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded.
c. If the Ignition file was not received, check that the Ignition files exist and that they have
the appropriate file and web server permissions on the serving host directly.
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as
part of their initial deployment.
a. Review the bootstrap node’s console to determine if the mechanism is injecting the
bootstrap node Ignition file correctly.
4. Verify that the bootstrap node has been assigned an IP address from the DHCP server.
5. Collect bootkube.service journald unit logs from the bootstrap node. Replace
<bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
a. Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the
bootstrap node’s fully qualified domain name:
The load balancer proxies port 6443 connections to bootstrap and control plane nodes.
Ensure that the proxy configuration meets OpenShift Container Platform installation
requirements.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have the fully qualified domain names of the bootstrap and control plane nodes.
If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP
server’s fully qualified domain name and the port number. You must also have SSH access to
the HTTP host.
Procedure
1. If you have access to the console for the control plane node, monitor the console until the node
reaches the login prompt. During the installation, Ignition log messages are output to the
console.
a. Verify the control plane node Ignition file URL. Replace <http_server_fqdn> with HTTP
server’s fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/master.ign 1
1 The -I option returns the header only. If the Ignition file is available on the specified
URL, the command returns 200 OK status. If it is not available, the command
returns 404 file not found.
b. To verify that the Ignition file was received by the control plane node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files:
If the master Ignition file is received, the associated HTTP GET log message will include
a 200 OK success status, indicating that the request succeeded.
c. If the Ignition file was not received, check that it exists on the serving host directly.
Ensure that the appropriate file and web server permissions are in place.
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as
part of their initial deployment.
a. Review the console for the control plane node to determine if the mechanism is
injecting the control plane node Ignition file correctly.
3. Check the availability of the storage device assigned to the control plane node.
4. Verify that the control plane node has been assigned an IP address from the DHCP server.
$ oc get nodes
b. If one of the control plane nodes does not reach a Ready status, retrieve a detailed node
description:
b. If those resources are listed as Not found, review pods in the openshift-ovn-kubernetes
namespace:
c. Review logs relating to failed OpenShift Container Platform OVN-Kubernetes pods in the
openshift-ovn-kubernetes namespace:
b. If the installer failed to create the network configuration, generate the Kubernetes
manifests again and review message output:
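A sketch of regenerating the manifests:

$ ./openshift-install create manifests --dir <installation_directory>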
8. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This
provides visibility into control plane node agent activity.
b. If the API is not functional, review the logs using SSH instead. Replace <master-node>.
<cluster_name>.<base_domain> with appropriate values:
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
Before attempting to collect diagnostic data over SSH, review whether the
data collected by running oc adm must gather and other oc commands is
sufficient instead. However, if the OpenShift Container Platform API is not
available, or the kubelet is not properly functioning on the target node, oc
operations will be impacted. In such situations, it is possible to access nodes
using ssh core@<node>.<cluster_name>.<base_domain>.
9. Retrieve crio.service journald unit logs on control plane nodes, after they have booted. This
provides visibility into control plane node CRI-O container runtime activity.
b. If the API is not functional, review the logs using SSH instead:
10. Collect logs from specific subdirectories under /var/log/ on control plane nodes.
a. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists
files in /var/log/openshift-apiserver/ on all control plane nodes:
b. Inspect a specific log within a /var/log/ subdirectory. The following example outputs
/var/log/openshift-apiserver/audit.log contents from all control plane nodes:
c. If the API is not functional, review the logs on each node using SSH instead. The following
example tails /var/log/openshift-apiserver/audit.log:
12. If you experience control plane node configuration issues, verify that the MCO, MCO endpoint,
and DNS record are functioning. The Machine Config Operator (MCO) manages operating
system configuration during the installation procedure. Also verify system clock accuracy and
certificate validity.
a. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate
values:
$ curl https://api-int.<cluster_name>:22623/config/master
b. If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint
is configured to run on port 22623.
c. Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.
ii. Run a reverse lookup to the assigned MCO IP address on the load balancer:
d. Verify that the MCO is functioning from the bootstrap node directly. Replace
<bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
e. System clock time must be synchronized between bootstrap, master, and worker nodes.
Check each node’s system clock reference time and time synchronization statistics:
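A sketch, run on each node, for example from a debug shell after chroot /host; this assumes chrony is the time service in use:

# chronyc tracking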
If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod
logs. You can also verify etcd DNS records and check DNS availability on control plane nodes.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have the fully qualified domain names of the control plane nodes.
Procedure
2. If any of the pods listed by the previous commands are not showing a Running or a Completed
status, gather diagnostic information for the pod.
c. If the pod has more than one container, the preceding command will create an error, and the
container names will be provided in the error message. Inspect logs for each container:
3. If the API is not functional, review etcd pod and container logs on each control plane node by
using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with
appropriate values.
b. For any pods not showing Ready status, inspect pod status in detail. Replace <pod_id>
with the pod’s ID listed in the output of the preceding command:
d. For any containers not showing Ready status, inspect container status in detail. Replace
<container_id> with container IDs listed in the output of the preceding command:
e. Review the logs for any containers not showing a Ready status. Replace <container_id>
with the container IDs listed in the output of the preceding command:
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
Before attempting to collect diagnostic data over SSH, review whether the
data collected by running oc adm must gather and other oc commands is
sufficient instead. However, if the OpenShift Container Platform API is not
available, or the kubelet is not properly functioning on the target node, oc
operations will be impacted. In such situations, it is possible to access nodes
using ssh core@<node>.<cluster_name>.<base_domain>.
4. Validate primary and secondary DNS server connectivity from control plane nodes.
7.1.10. Investigating control plane node kubelet and API server issues
To investigate control plane node kubelet and API server issues during installation, check DNS, DHCP,
and load balancer functionality. Also, verify that certificates have not expired.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have the fully qualified domain names of the control plane nodes.
Procedure
1. Verify that the API server’s DNS record directs the kubelet on control plane nodes to https://api-int.<cluster_name>.<base_domain>:6443. Ensure that the record references the load balancer.
2. Ensure that the load balancer’s port 6443 definition references each control plane node.
3. Check that unique control plane node hostnames have been provided by DHCP.
4. Inspect the kubelet.service journald unit logs on each control plane node.
b. If the API is not functional, review the logs using SSH instead. Replace <master-node>.
<cluster_name>.<base_domain> with appropriate values:
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
Before attempting to collect diagnostic data over SSH, review whether the
data collected by running oc adm must gather and other oc commands is
sufficient instead. However, if the OpenShift Container Platform API is not
available, or the kubelet is not properly functioning on the target node, oc
operations will be impacted. In such situations, it is possible to access nodes
using ssh core@<node>.<cluster_name>.<base_domain>.
5. Check for certificate expiration messages in the control plane node kubelet logs.
$ oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'
b. If the API is not functional, review the logs using SSH instead. Replace <master-node>.
<cluster_name>.<base_domain> with appropriate values:
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have the fully qualified domain names of the bootstrap and worker nodes.
If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP
server’s fully qualified domain name and the port number. You must also have SSH access to
the HTTP host.
Procedure
1. If you have access to the worker node’s console, monitor the console until the node reaches the
login prompt. During the installation, Ignition log messages are output to the console.
a. Verify the worker node Ignition file URL. Replace <http_server_fqdn> with HTTP
server’s fully qualified domain name:
$ curl -I http://<http_server_fqdn>:<port>/worker.ign 1
1 The -I option returns the header only. If the Ignition file is available on the specified
URL, the command returns 200 OK status. If it is not available, the command
returns 404 file not found.
b. To verify that the Ignition file was received by the worker node, query the HTTP server
logs on the HTTP host. For example, if you are using an Apache web server to serve
Ignition files:
If the worker Ignition file is received, the associated HTTP GET log message will include
a 200 OK success status, indicating that the request succeeded.
c. If the Ignition file was not received, check that it exists on the serving host directly.
Ensure that the appropriate file and web server permissions are in place.
If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as
part of their initial deployment.
a. Review the worker node’s console to determine if the mechanism is injecting the worker
node Ignition file correctly.
4. Verify that the worker node has been assigned an IP address from the DHCP server.
$ oc get nodes
b. Retrieve a detailed node description for any worker nodes not showing a Ready status:
6. Unlike control plane nodes, worker nodes are deployed and scaled using the Machine API
Operator. Check the status of the Machine API Operator.
b. If the Machine API Operator pod does not have a Ready status, detail the pod’s events:
c. Inspect machine-api-operator container logs. The container runs within the machine-api-
operator pod:
d. Also inspect kube-rbac-proxy container logs. The container also runs within the machine-
api-operator pod:
7. Monitor kubelet.service journald unit logs on worker nodes, after they have booted. This
provides visibility into worker node agent activity.
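a. For example, while the API is functional:
$ oc adm node-logs --role=worker -u kubelet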
b. If the API is not functional, review the logs using SSH instead. Replace <worker-node>.
<cluster_name>.<base_domain> with appropriate values:
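A typical form of the SSH query, assuming the default core user on RHCOS:
$ ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service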
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
Before attempting to collect diagnostic data over SSH, review whether the
data collected by running oc adm must-gather and other oc commands is
sufficient instead. However, if the OpenShift Container Platform API is not
available, or the kubelet is not properly functioning on the target node, oc
operations will be impacted. In such situations, it is possible to access nodes
using ssh core@<node>.<cluster_name>.<base_domain>.
8. Retrieve crio.service journald unit logs on worker nodes, after they have booted. This provides
visibility into worker node CRI-O container runtime activity.
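a. For example, while the API is functional:
$ oc adm node-logs --role=worker -u crio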
b. If the API is not functional, review the logs using SSH instead:
a. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists
files in /var/log/sssd/ on all worker nodes:
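For example:
$ oc adm node-logs --role=worker --path=sssd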
b. Inspect a specific log within a /var/log/ subdirectory. The following example outputs
/var/log/sssd/audit.log contents from all worker nodes:
c. If the API is not functional, review the logs on each node using SSH instead. The following
example tails /var/log/sssd/sssd.log:
11. If you experience worker node configuration issues, verify that the MCO, MCO endpoint, and
DNS record are functioning. The Machine Config Operator (MCO) manages operating system
configuration during the installation procedure. Also verify system clock accuracy and
certificate validity.
a. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate
values:
$ curl https://api-int.<cluster_name>:22623/config/worker
b. If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint
is configured to run on port 22623.
c. Verify that the MCO endpoint’s DNS record is configured and resolves to the load balancer.
ii. Run a reverse lookup to the assigned MCO IP address on the load balancer:
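For example, a reverse lookup with dig, where the IP address and DNS server are placeholders:
$ dig -x <mco_endpoint_ip_address> @<dns_server>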
d. Verify that the MCO is functioning from the bootstrap node directly. Replace
<bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
e. System clock time must be synchronized between bootstrap, master, and worker nodes.
Check each node’s system clock reference time and time synchronization statistics:
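For example, you can query chrony over SSH on each node, assuming the default core user on RHCOS:
$ ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking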
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Check that cluster Operators are all available at the end of an installation.
$ oc get clusteroperators
2. Verify that all of the required certificate signing requests (CSRs) are approved. Some nodes
might not move to a Ready status and some cluster Operators might not become available if
there are pending CSRs.
a. Check the status of the CSRs and ensure that you see a client and server request with the
Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
In this example, two machines are joining the cluster. You might see more approved CSRs in
the list.
b. If the CSRs were not approved, after all of the pending CSRs for the machines you added
are in Pending status, approve the CSRs for your cluster machines:
NOTE
Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour,
the certificates will rotate, and more than two certificates will be present for
each node. You must approve all of these certificates. After you approve the
initial CSRs, the subsequent node client CSRs are automatically approved by
the cluster kube-controller-manager.
NOTE
For clusters running on platforms that are not machine API enabled, such as
bare metal and other user-provisioned infrastructure, you must implement a
method of automatically approving the kubelet serving certificate requests
(CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs
commands cannot succeed, because a serving certificate is required when
the API server connects to the kubelet. Any operation that contacts the
Kubelet endpoint requires this certificate approval to be in place. The
method must watch for new CSRs, confirm that the CSR was submitted by
the node-bootstrapper service account in the system:node or
system:admin groups, and confirm the identity of the node.
To approve them individually, run the following command for each valid CSR:
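For example, where <csr_name> is the name of a CSR from the list of current CSRs:
$ oc adm certificate approve <csr_name>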
5. Obtain a detailed description for pods that do not have Running status:
7. When experiencing pod base image related issues, review base image status.
NOTE
You use a different command to gather logs about an unsuccessful installation than to
gather logs from a running cluster. If you must gather logs from a running cluster, use the
oc adm must-gather command.
Prerequisites
Your OpenShift Container Platform installation failed before the bootstrap process finished.
The bootstrap node is running and accessible through SSH.
The ssh-agent process is active on your computer, and you provided the same SSH key to both
the ssh-agent process and the installation program.
If you tried to install a cluster on infrastructure that you provisioned, you must have the fully
qualified domain names of the bootstrap and control plane nodes.
Procedure
1. Generate the commands that are required to obtain the installation logs from the bootstrap and
control plane machines:
If you used installer-provisioned infrastructure, change to the directory that contains the
installation program and run the following command:
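A typical form of the command:
$ ./openshift-install gather bootstrap --dir <installation_directory>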
If you used infrastructure that you provisioned yourself, change to the directory that
contains the installation program and run the following command:
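A typical form of the command; the bootstrap and control plane addresses are placeholders that you supply:
$ ./openshift-install gather bootstrap --dir <installation_directory> \
  --bootstrap <bootstrap_address> \
  --master <master_1_address> \
  --master <master_2_address> \
  --master <master_3_address>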
1 For installation_directory, specify the same directory you specified when you ran
./openshift-install create cluster. This directory contains the OpenShift Container
Platform definition files that the installation program creates.
NOTE
A default cluster contains three control plane machines. List all of your
control plane machines as shown, no matter how many your cluster uses.
Example output
If you open a Red Hat support case about your installation failure, include the compressed logs
in the case.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
List the name, status, and role for all nodes in the cluster:
$ oc get nodes
Summarize CPU and memory usage for each node within the cluster:
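For example:
$ oc adm top nodes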
You can review cluster node health status, resource consumption statistics, and node logs. Additionally,
you can query kubelet status on individual nodes.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. The kubelet is managed using a systemd service on each node. Review the kubelet’s status by
querying the kubelet systemd service within a debug pod.
$ oc debug node/my-node
NOTE
If you are running oc debug on a control plane node, you can find
administrative kubeconfig files in the /etc/kubernetes/static-pod-
resources/kube-apiserver-certs/secrets/node-kubeconfigs directory.
b. Set /host as the root directory within the debug shell. The debug pod mounts the host’s
root file system in /host within the pod. By changing the root directory to /host, you can run
binaries contained in the host’s executable paths:
# chroot /host
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
However, if the OpenShift Container Platform API is not available, or kubelet
is not properly functioning on the target node, oc operations will be
impacted. In such situations, it is possible to access nodes using ssh
core@<node>.<cluster_name>.<base_domain> instead.
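With /host as the root directory, you can then query the status of the kubelet systemd service, for example:
# systemctl status kubelet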
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The
following example queries control plane nodes only:
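For example:
$ oc adm node-logs --role=master -u kubelet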
a. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists
files in /var/log/openshift-apiserver/ on all control plane nodes:
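For example:
$ oc adm node-logs --role=master --path=openshift-apiserver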
b. Inspect a specific log within a /var/log/ subdirectory. The following example outputs
/var/log/openshift-apiserver/audit.log contents from all control plane nodes:
c. If the API is not functional, review the logs on each node using SSH instead. The following
example tails /var/log/openshift-apiserver/audit.log:
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
Before attempting to collect diagnostic data over SSH, review whether the
data collected by running oc adm must-gather and other oc commands is
sufficient instead. However, if the OpenShift Container Platform API is not
available, or the kubelet is not properly functioning on the target node, oc
operations will be impacted. In such situations, it is possible to access nodes
using ssh core@<node>.<cluster_name>.<base_domain>.
When container runtime issues occur, verify the status of the crio systemd service on each node. Gather
CRI-O journald unit logs from nodes that have container runtime issues.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Review CRI-O status by querying the crio systemd service on a node, within a debug pod.
$ oc debug node/my-node
b. Set /host as the root directory within the debug shell. The debug pod mounts the host’s
root file system in /host within the pod. By changing the root directory to /host, you can run
binaries contained in the host’s executable paths:
# chroot /host
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
However, if the OpenShift Container Platform API is not available, or the
kubelet is not properly functioning on the target node, oc operations will be
impacted. In such situations, it is possible to access nodes using ssh
core@<node>.<cluster_name>.<base_domain> instead.
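With /host as the root directory, you can then query the status of the crio systemd service, for example:
# systemctl status crio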
If you experience CRI-O issues, you can obtain CRI-O journald unit logs from a node.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have the fully qualified domain names of the control plane or worker machines.
Procedure
1. Gather CRI-O journald unit logs. The following example collects logs from all control plane
nodes within the cluster:
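For example:
$ oc adm node-logs --role=master -u crio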
3. If the API is not functional, review the logs using SSH instead. Replace <node>.
<cluster_name>.<base_domain> with appropriate values:
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster
changes. Accessing cluster nodes by using SSH is not recommended. Before
attempting to collect diagnostic data over SSH, review whether the data
collected by running oc adm must-gather and other oc commands is sufficient
instead. However, if the OpenShift Container Platform API is not available, or the
kubelet is not properly functioning on the target node, oc operations will be
impacted. In such situations, it is possible to access nodes using ssh
core@<node>.<cluster_name>.<base_domain>.
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container
XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open
/var/lib/containers/storage/overlay/XXX/link: no such file or directory
You cannot create a new container on a working node and the “can’t stat lower layer” error
appears:
can't stat lower layer ... because it does not exist. Going through storage to recreate the
missing symlinks.
Your node is in the NotReady state after a cluster upgrade or if you attempt to reboot it.
You are unable to start a debug shell on the node using oc debug node/<node_name>
because the container runtime instance (crio) is not working.
Follow this process to completely wipe the CRI-O storage and resolve the errors.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Cordon the node to prevent any workload from being scheduled on it if it reaches the Ready
status. You will know that scheduling is disabled when SchedulingDisabled appears in the
Status section:
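For example:
$ oc adm cordon <node_name>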
NOTE
3. When the node returns, connect back to the node over SSH or the console. Then log in as the
root user:
$ ssh core@<node>.<cluster_name>.<base_domain>
$ sudo -i
a. Use the following command to stop the pods that are not in the HostNetwork. They must
be removed first because their removal relies on the networking plugin pods, which are in
the HostNetwork.
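A sketch of one way to do this, assuming that the crictl and jq binaries are available on the node; pods whose network namespace option is not NODE are not host-networked:
# for pod in $(crictl pods -q); do if [[ "$(crictl inspectp $pod | jq -r .status.linux.namespaces.options.network)" != "NODE" ]]; then crictl rmp -f $pod; fi; done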
7. After you run those commands, you can completely wipe the ephemeral storage:
# crio wipe -f
9. You will know that the cleanup worked if the crio and kubelet services are started and the node
is in the Ready status:
$ oc get nodes
Example output
10. Mark the node schedulable. You will know that scheduling is enabled when
SchedulingDisabled is no longer in the status:
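For example:
$ oc adm uncordon <node_name>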
Example output
The x86_64 architecture supports kdump in General Availability (GA) status, whereas other
architectures support kdump in Technology Preview (TP) status.
The following table provides details about the support level of kdump for different architectures.
Architecture    Support level
x86_64          GA
aarch64         TP
s390x           TP
ppc64le         TP
IMPORTANT
Kdump support, for the preceding three architectures in the table, is a Technology
Preview feature only. Technology Preview features are not supported with Red Hat
production service level agreements (SLAs) and might not be functionally complete. Red
Hat does not recommend using them in production. These features provide early access
to upcoming product features, enabling customers to test functionality and provide
feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features,
see Technology Preview Features Support Scope .
RHCOS ships with the kexec-tools package, but manual configuration is required to enable the kdump
service.
Procedure
Perform the following steps to enable kdump on RHCOS.
1. To reserve memory for the crash kernel during the first kernel booting, provide kernel
arguments by entering the following command:
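For example, on RHCOS you can append the crashkernel kernel argument with rpm-ostree; the value shown is the generic default, and some platforms might require a different value:
# rpm-ostree kargs --append='crashkernel=256M'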
NOTE
2. Optional: To write the crash dump over the network or to some other location, rather than to the
default local /var/crash location, edit the /etc/kdump.conf configuration file.
NOTE
If your node uses LUKS-encrypted devices, you must use network dumps as
kdump does not support saving crash dumps to LUKS-encrypted devices.
For details on configuring the kdump service, see the comments in /etc/sysconfig/kdump,
/etc/kdump.conf, and the kdump.conf manual page. Also refer to the RHEL kdump
documentation for further information on configuring the dump target.
IMPORTANT
If you have multipathing enabled on your primary disk, the dump target must be
either an NFS or SSH server and you must exclude the multipath module from
your /etc/kdump.conf configuration file.
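The kdump systemd service must also be enabled before the reboot that follows; a typical command, assuming standard systemd tooling on RHCOS:
# systemctl enable kdump.service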
# systemctl reboot
5. Ensure that kdump has loaded a crash kernel by checking that the kdump.service systemd
service has started and exited successfully and that the command, cat
/sys/kernel/kexec_crash_loaded, prints the value 1.
The kdump service is intended to be enabled per node to debug kernel problems. Because there are
costs to having kdump enabled, and these costs accumulate with each additional kdump-enabled node,
it is recommended that the kdump service only be enabled on each node as needed. Potential costs of
enabling the kdump service on each node include:
Less available RAM due to memory being reserved for the crash kernel.
If you are aware of the downsides and trade-offs of having the kdump service enabled, it is possible to
enable kdump in a cluster-wide fashion. Although machine-specific machine configs are not yet
supported, you can use a systemd unit in a MachineConfig object as a day-1 customization and have
kdump enabled on all nodes in the cluster. You can create a MachineConfig object and inject that
object into the set of manifest files used by Ignition during cluster setup.
NOTE
See "Customizing nodes" in the Installing → Installation configuration section for more
information and examples on how to use Ignition configs.
Procedure
Create a MachineConfig object for cluster-wide configuration:
1. Create a Butane config file, 99-worker-kdump.bu, that configures and enables kdump:
variant: openshift
version: 4.17.0
metadata:
  name: 99-worker-kdump 1
  labels:
    machineconfiguration.openshift.io/role: worker 2
openshift:
  kernel_arguments: 3
    - crashkernel=256M
storage:
  files:
    - path: /etc/kdump.conf 4
      mode: 0644
      overwrite: true
      contents:
        inline: |
          path /var/crash
          core_collector makedumpfile -l --message-level 7 -d 31
    - path: /etc/sysconfig/kdump 5
      mode: 0644
      overwrite: true
      contents:
        inline: |
          KDUMP_COMMANDLINE_REMOVE="hugepages hugepagesz slub_debug quiet log_buf_len swiotlb"
          KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable" 6
          KEXEC_ARGS="-s"
          KDUMP_IMG="vmlinuz"
systemd:
  units:
    - name: kdump.service
      enabled: true
1 2 Replace worker with master in both locations when creating a MachineConfig object for
control plane nodes.
3 Provide kernel arguments to reserve memory for the crash kernel. You can add other
kernel arguments if necessary. For the ppc64le platform, the recommended value for
crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-
128G:2G,128G-:4G.
4 If you want to change the contents of /etc/kdump.conf from the default, include this
section and modify the inline subsection accordingly.
5 If you want to change the contents of /etc/sysconfig/kdump from the default, include this
section and modify the inline subsection accordingly.
6 For the ppc64le platform, replace nr_cpus=1 with maxcpus=1 because nr_cpus=1 is not
supported on this platform.
NOTE
To export the dumps to NFS targets, the nfs kernel module must be explicitly added to
the configuration file:
nfs server.example.com:/export/cores
core_collector makedumpfile -l --message-level 7 -d 31
extra_modules nfs
2. Use Butane to generate a machine config YAML file, 99-worker-kdump.yaml, containing the
configuration to be delivered to the nodes:
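A typical invocation, assuming the Butane CLI is installed locally:
$ butane 99-worker-kdump.bu -o 99-worker-kdump.yaml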
3. Put the YAML file into the <installation_directory>/manifests/ directory during cluster setup.
You can also create this MachineConfig object after cluster setup with the YAML file:
$ oc create -f 99-worker-kdump.yaml
See the Testing the kdump configuration section in the RHEL documentation for kdump.
See the Analyzing a core dump section in the RHEL documentation for kdump.
NOTE
Additional resources
kdump.conf(5) — a manual page for the /etc/kdump.conf configuration file containing the full
documentation of available options
If a machine cannot be provisioned, Ignition fails and RHCOS will boot into the emergency shell. Use the
following procedure to get debugging information.
Procedure
1. Run the following command to see which service units failed:
$ systemctl --failed
2. Optional: Run the following command on an individual service unit to find out more information:
$ journalctl -u <unit>.service
After the nodeip-configuration.service service determines the correct NIC, the service creates the
/etc/systemd/system/kubelet.service.d/20-nodenet.conf file. The 20-nodenet.conf file sets the
KUBELET_NODE_IP environment variable to the IP address that the service selected.
When the kubelet service starts, it reads the value of the environment variable from the 20-
nodenet.conf file and sets the IP address as the value of the --node-ip kubelet command-line
argument. As a result, the kubelet service uses the selected IP address as the node IP address.
If hardware or networking is reconfigured after installation, or if there is a networking layout where the
node IP should not come from the default route interface, it is possible for the nodeip-
configuration.service service to select a different NIC after a reboot. In some cases, you might be able
to detect that a different NIC is selected by reviewing the INTERNAL-IP column in the output from the
oc get nodes -o wide command.
If network communication is disrupted or misconfigured because a different NIC is selected, you might
receive the following error: EtcdCertSignerControllerDegraded. You can create a hint file that includes
the NODEIP_HINT variable to override the default IP selection logic. For more information, see
Optional: Overriding the default node IP selection logic.
To override the default IP selection logic, you can create a hint file that includes the NODEIP_HINT
variable. Creating a hint file allows you to select a specific node IP address from the interface in the
subnet of the IP address specified in the NODEIP_HINT variable.
For example, if a node has two interfaces, eth0 with an address of 10.0.0.10/24, and eth1 with an
address of 192.0.2.5/24, and the default route points to eth0 (10.0.0.10), the node IP address would
normally use the 10.0.0.10 IP address.
Users can configure the NODEIP_HINT variable to point at a known IP in the subnet, for example, a
subnet gateway such as 192.0.2.1 so that the other subnet, 192.0.2.0/24, is selected. As a result, the
192.0.2.5 IP address on eth1 is used for the node.
The following procedure shows how to override the default node IP selection logic.
Procedure
1. Create a hint file that contains the NODEIP_HINT variable, for example:
NODEIP_HINT=192.0.2.1
IMPORTANT
Do not use the exact IP address of a node as a hint, for example, 192.0.2.5.
Using the exact IP address of a node causes the node using the hint IP
address to fail to configure correctly.
The IP address in the hint file is only used to determine the correct subnet. It
will not receive traffic as a result of appearing in the hint file.
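The machine config manifests that follow reference a Base64 encoding of the hint file content. One way to generate it, assuming standard coreutils; the exact command can vary:
$ echo -n "NODEIP_HINT=192.0.2.1" | base64 -w0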
Example output
Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==
3. Activate the hint by creating a machine config manifest for both master and worker roles
before deploying the cluster:
99-nodeip-hint-master.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-nodeip-hint-master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,<encoded_content> 1
          mode: 0644
          overwrite: true
          path: /etc/default/nodeip-configuration
99-nodeip-hint-worker.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-nodeip-hint-worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,<encoded_content> 1
          mode: 0644
          overwrite: true
          path: /etc/default/nodeip-configuration
4. Save the manifest to the directory where you store your cluster configuration, for example,
~/clusterconfigs.
You can create an additional or secondary Open vSwitch (OVS) bridge, br-ex1, that OVN-Kubernetes
manages and the Multiple External Gateways (MEG) implementation uses for defining external
gateways for an OpenShift Container Platform node. You can define a MEG in an
AdminPolicyBasedExternalRoute custom resource (CR). The MEG implementation provides a pod
with access to multiple gateways, equal-cost multipath (ECMP) routes, and the Bidirectional Forwarding
Detection (BFD) implementation.
Consider a use case where pods that are affected by the Multiple External Gateways (MEG) feature
need to egress traffic through a different interface, for example br-ex1, on a node. Egress traffic for
pods that are not affected by MEG is routed to the default OVS br-ex bridge.
IMPORTANT
Currently, MEG is unsupported for use with other egress features, such as egress IP,
egress firewalls, or egress routers. Attempting to use MEG with egress features like
egress IP can result in routing and traffic flow conflicts. This occurs because of how
OVN-Kubernetes handles routing and source network address translation (SNAT). This
results in inconsistent routing and might break connections in some environments where
the return path must match the incoming path.
You must define the additional bridge in an interface definition of a machine configuration manifest file.
The Machine Config Operator uses the manifest to create a new file at /etc/ovnk/extra_bridge on the
host. The new file includes the name of the network interface that the additional OVS bridge configures
for a node.
After you create and edit the manifest file, the Machine Config Operator completes tasks in the
following order:
1. Drains nodes one at a time, based on the selected machine configuration pool.
2. Injects Ignition configuration files into each node, so that each node receives the additional br-
ex1 bridge network configuration.
3. Verifies that the br-ex MAC address matches the MAC address for the interface that br-ex uses
for the network connection.
4. Executes the configure-ovs.sh shell script that references the new interface definition.
NOTE
After all the nodes return to the Ready state and the OVN-Kubernetes Operator detects
and configures br-ex and br-ex1, the Operator applies the k8s.ovn.org/l3-gateway-
config annotation to each node.
For more information about useful situations for the additional br-ex1 bridge and a situation that always
requires the default br-ex bridge, see "Configuration for a localnet topology".
Procedure
1. Optional: Create an interface connection that your additional bridge, br-ex1, can use by
completing the following steps. The example steps show the creation of a new bond and its
dependent interfaces that are all defined in a machine configuration manifest file. The additional
bridge uses the MachineConfig object to form an additional bond interface.
IMPORTANT
Ensure that the additional interface, or the sub-interfaces that you define for a bond
interface, are not already used by an existing br-ex OVN-Kubernetes network deployment.
a. Create the following interface definition files. These files get added to a machine
configuration manifest file so that host nodes can access the definition files.
Example of an Ethernet interface definition file that is named eno1.config
[connection]
id=eno1
type=ethernet
interface-name=eno1
master=bond1
slave-type=bond
autoconnect=true
autoconnect-priority=20
Example of an Ethernet interface definition file that is named eno2.config
[connection]
id=eno2
type=ethernet
interface-name=eno2
master=bond1
slave-type=bond
autoconnect=true
autoconnect-priority=20
Example of the bond interface definition file that is named bond1.config
[connection]
id=bond1
type=bond
interface-name=bond1
autoconnect=true
connection.autoconnect-slaves=1
autoconnect-priority=20
[bond]
mode=802.3ad
miimon=100
xmit_hash_policy="layer3+4"
[ipv4]
method=auto
b. Convert the definition files to Base64 encoded strings by running the following command:
$ base64 <directory_path>/eno1.config
$ base64 <directory_path>/eno2.config
$ base64 <directory_path>/bond1.config
2. Prepare the environment variables. Replace <machine_role> with the node role, such as
worker, and replace <interface_name> with the name of the interface that hosts your additional br-ex1 bridge.
$ export ROLE=<machine_role>
Example of a machine configuration file with definitions added for bond1, eno1, and
eno2
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${ROLE}
  name: 12-${ROLE}-sec-bridge-cni
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:;base64,<base-64-encoded-contents-for-bond1.conf>
          path: /etc/NetworkManager/system-connections/bond1.nmconnection
          filesystem: root
          mode: 0600
        - contents:
            source: data:;base64,<base-64-encoded-contents-for-eno1.conf>
          path: /etc/NetworkManager/system-connections/eno1.nmconnection
          filesystem: root
          mode: 0600
        - contents:
            source: data:;base64,<base-64-encoded-contents-for-eno2.conf>
          path: /etc/NetworkManager/system-connections/eno2.nmconnection
          filesystem: root
          mode: 0600
# ...
4. Create a machine configuration manifest file for configuring the network plugin by entering the
following command in your terminal:
$ oc create -f <machine_config_file_name>
5. Create an Open vSwitch (OVS) bridge, br-ex1, on nodes by using the OVN-Kubernetes network
plugin to create an extra_bridge file. Ensure that you save the file in the
/etc/ovnk/extra_bridge path of the host. The file must state the interface name that supports
the additional bridge and not the default interface that supports br-ex, which holds the primary
IP address of the node.
bond1
6. Create a machine configuration manifest file that defines the existing static interface that hosts
br-ex1 on any nodes restarted on your cluster:
Example of a machine configuration file that defines bond1 as the interface for
hosting br-ex1
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${worker}
  name: 12-worker-extra-bridge
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/ovnk/extra_bridge
          mode: 0420
          overwrite: true
          contents:
            source: data:text/plain;charset=utf-8,bond1
          filesystem: root
7. Apply the machine configuration manifest file to your cluster by entering the following command
in your terminal:
$ oc create -f <machine_config_file_name>
8. Optional: You can override the br-ex selection logic for nodes by creating a machine
configuration file that in turn creates a /var/lib/ovnk/iface_default_hint resource.
NOTE
The resource lists the name of the interface that br-ex selects for your cluster.
By default, br-ex selects the primary interface for a node based on boot order
and the IP address subnet in the machine network. Certain machine network
configurations might require that br-ex continues to select the default interfaces
or bonds for a host node.
a. Create a machine configuration file on the host node to override the default interface.
IMPORTANT
Only create this machine configuration file for the purposes of changing the
br-ex selection logic. Using this file to change the IP addresses of existing
nodes in your cluster is not supported.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${worker}
  name: 12-worker-br-ex-override
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /var/lib/ovnk/iface_default_hint
          mode: 0420
          overwrite: true
          contents:
            source: data:text/plain;charset=utf-8,bond0 1
          filesystem: root
1 Ensure bond0 exists on the node before you apply the machine configuration file to
the node.
b. Before you apply the configuration to all new nodes in your cluster, reboot the host node to
verify that br-ex selects the intended interface and does not conflict with the new
interfaces that you defined on br-ex1.
c. Apply the machine configuration file to all new nodes in your cluster:
$ oc create -f <machine_config_file_name>
Verification
1. Identify the IP addresses of nodes with the exgw-ip-addresses label in your cluster to verify
that the nodes use the additional bridge instead of the default bridge:
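For example, a sketch that filters the node annotations for the label:
$ oc get nodes -o json | grep exgw-ip-addresses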
Example output
"k8s.ovn.org/l3-gateway-config":
\"exgw-ip-address\":\"172.xx.xx.yy/24\",\"next-hops\":[\"xx.xx.xx.xx\"],
2. Observe that the additional bridge exists on target nodes by reviewing the network interface
names on the host node:
Example output
3. Optional: If you use /var/lib/ovnk/iface_default_hint, check that the MAC address of br-ex
matches the MAC address of the primary selected interface:
Example output that shows the primary interface for br-ex as bond0
Additional resources
If you modify the log level on a node temporarily, be aware that you can receive log messages from the
machine config daemon on the node like the following example:
To avoid the log messages related to the mismatch, revert the log level change after you complete your
troubleshooting.
For short-term troubleshooting, you can configure the Open vSwitch (OVS) log level temporarily. The
following procedure does not require rebooting the node. In addition, the configuration change does not
persist after you reboot the node.
After you perform this procedure to change the log level, you can receive log messages from the
machine config daemon that indicate a content mismatch for the ovs-vswitchd.service. To avoid the
log messages, repeat this procedure and set the log level to the original value.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Start a debug pod for the target node by entering the following command:
$ oc debug node/<node_name>
2. Set /host as the root directory within the debug shell. The debug pod mounts the root file
system from the host in /host within the pod. By changing the root directory to /host, you can
run binaries from the host file system:
# chroot /host
3. Display the current syslog log level for Open vSwitch (OVS) modules:
# ovs-appctl vlog/list
The following example output shows the log level for syslog set to info.
Example output
Restart=always
ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :$${OVS_USER_ID##*:} /var/lib/openvswitch'
ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :$${OVS_USER_ID##*:} /etc/openvswitch'
ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :$${OVS_USER_ID##*:} /run/openvswitch'
ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg
ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg
In the preceding example, the log level is set to dbg. Change the last two lines by setting
syslog:<log_level> to off, emer, err, warn, info, or dbg. The off log level filters out all log
messages.
After you change the log level in the file, reload the systemd manager configuration:
# systemctl daemon-reload
For long-term changes to the Open vSwitch (OVS) log level, you can change the log level permanently.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Create a MachineConfig object in a file, such as 99-change-ovs-loglevel.yaml, similar to the
following example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master 1
  name: 99-change-ovs-loglevel
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - dropins:
            - contents: |
                [Service]
                ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2
                ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg
              name: 20-ovs-vswitchd-restart.conf
          name: ovs-vswitchd.service
1 After you perform this procedure to configure control plane nodes, repeat the procedure
and set the role to worker to configure worker nodes.
2 Set the syslog:<log_level> value. Log levels are off, emer, err, warn, info, or dbg. Setting
the value to off filters out all log messages.
2. Apply the machine config by running the following command:
$ oc apply -f 99-change-ovs-loglevel.yaml
Additional resources
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
If the API is not functional, display the logs on the node directly, for example from a debug shell or an
SSH session:
# journalctl -b -f -u ovs-vswitchd.service
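If the API is functional, you can display the same unit logs from outside the cluster with the oc CLI, for example:
$ oc adm node-logs <node_name> -u ovs-vswitchd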
OpenShift Container Platform 4.17 includes a default set of Operators that are required for proper
functioning of the cluster. These default Operators are managed by the Cluster Version Operator
(CVO).
As a cluster administrator, you can install application Operators from the OperatorHub using the
OpenShift Container Platform web console or the CLI. You can then subscribe the Operator to one or
more namespaces to make it available for developers on your cluster. Application Operators are
managed by Operator Lifecycle Manager (OLM).
If you experience Operator issues, verify Operator subscription status. Check Operator pod health
across the cluster and gather Operator logs for diagnosis.
Condition Description
NOTE
Default OpenShift Container Platform cluster Operators are managed by the Cluster
Version Operator (CVO) and they do not have a Subscription object. Application
Operators are managed by Operator Lifecycle Manager (OLM) and they have a
Subscription object.
Additional resources
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
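The output referred to in the following step comes from inspecting the Operator's Subscription resource; a typical command, with the names as placeholders, is:
$ oc describe subscription <subscription_name> -n <operator_namespace>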
3. In the command output, find the Conditions section for the status of Operator subscription
condition types. In the following example, the CatalogSourcesUnhealthy condition type has a
status of false because all available catalog sources are healthy:
Example output
Name: cluster-logging
Namespace: openshift-logging
Labels: operators.coreos.com/cluster-logging.openshift-logging=
Annotations: <none>
API Version: operators.coreos.com/v1alpha1
Kind: Subscription
# ...
Conditions:
Last Transition Time: 2019-07-29T13:42:57Z
Message: all available catalogsources are healthy
Reason: AllCatalogSourcesHealthy
Status: False
Type: CatalogSourcesUnhealthy
# ...
NOTE
Default OpenShift Container Platform cluster Operators are managed by the Cluster
Version Operator (CVO) and they do not have a Subscription object. Application
Operators are managed by Operator Lifecycle Manager (OLM) and they have a
Subscription object.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. List the catalog sources in a namespace. For example, you can check the openshift-
marketplace namespace, which is used for cluster-wide catalog sources:
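For example:
$ oc get catalogsources -n openshift-marketplace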
Example output
2. Use the oc describe command to get more details and status about a catalog source:
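For example, using the catalog source name from the preceding step:
$ oc describe catalogsource example-catalog -n openshift-marketplace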
Example output
Name: example-catalog
Namespace: openshift-marketplace
Labels: <none>
Annotations: operatorframework.io/managed-by: marketplace-operator
target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
API Version: operators.coreos.com/v1alpha1
Kind: CatalogSource
# ...
Status:
Connection State:
Address: example-catalog.openshift-marketplace.svc:50051
Last Connect: 2021-09-09T17:07:35Z
Last Observed State: TRANSIENT_FAILURE
Registry Service:
Created At: 2021-09-09T17:05:45Z
Port: 50051
Protocol: grpc
Service Name: example-catalog
Service Namespace: openshift-marketplace
# ...
In the preceding example output, the last observed state is TRANSIENT_FAILURE. This state
indicates that there is a problem establishing a connection for the catalog source.
3. List the pods in the namespace where your catalog source was created:
Example output
When a catalog source is created in a namespace, a pod for the catalog source is created in that
namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is
ImagePullBackOff. This status indicates that there is an issue pulling the catalog source’s index
image.
4. Use the oc describe command to inspect a pod for more detailed information:
Example output
Name: example-catalog-bwt8z
Namespace: openshift-marketplace
Priority: 0
Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 48s default-scheduler Successfully assigned openshift-
marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from
openshift-sdn
Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image
"quay.io/example-org/example-catalog:v1"
Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff
Normal Pulling 8s (x3 over 47s) kubelet Pulling image "quay.io/example-
org/example-catalog:v1"
Warning Failed 8s (x3 over 47s) kubelet Failed to pull image
"quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading
manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested
resource is not authorized
Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull
In the preceding example output, the error messages indicate that the catalog source’s index
image is failing to pull successfully because of an authorization issue. For example, the index
image might be stored in a registry that requires login credentials.
Additional resources
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. List Operators running in the cluster. The output includes Operator version, availability, and up-
time information:
$ oc get clusteroperators
2. List Operator pods running in the Operator’s namespace, plus pod status, restarts, and age:
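For example:
$ oc get pods -n <operator_namespace>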
$ oc debug node/my-node
b. Set /host as the root directory within the debug shell. The debug pod mounts the host’s
root file system in /host within the pod. By changing the root directory to /host, you can run
binaries contained in the host’s executable paths:
# chroot /host
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
However, if the OpenShift Container Platform API is not available, or the
kubelet is not properly functioning on the target node, oc operations will be
impacted. In such situations, it is possible to access nodes using ssh
core@<node>.<cluster_name>.<base_domain> instead.
c. List details about the node’s containers, including state and associated pod IDs:
# crictl ps
d. List information about a specific Operator container on the node. The following example
lists information about the network-operator container:
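A typical filter by container name:
# crictl ps --name network-operator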
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
You have the fully qualified domain names of the control plane or worker machines.
Procedure
1. List the Operator pods that are running in the Operator’s namespace, plus the pod status,
restarts, and age:
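The logs for an Operator pod can then be reviewed with a command of the following form, where the pod name and namespace are placeholders:
$ oc logs pod/<operator_pod_name> -n <operator_namespace>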
If an Operator pod has multiple containers, the preceding command will produce an error that
includes the name of each container. Query logs from an individual container:
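For example:
$ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>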
3. If the API is not functional, review Operator pod and container logs on each control plane node
by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with
appropriate values.
b. For any Operator pods not showing a Ready status, inspect the pod’s status in detail.
Replace <operator_pod_id> with the Operator pod’s ID listed in the output of the
preceding command:
d. For any Operator container not showing a Ready status, inspect the container’s status in
detail. Replace <container_id> with a container ID listed in the output of the preceding
command:
e. Review the logs for any Operator containers not showing a Ready status. Replace
<container_id> with a container ID listed in the output of the preceding command:
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
Before attempting to collect diagnostic data over SSH, review whether the
data collected by running oc adm must gather and other oc commands is
sufficient instead. However, if the OpenShift Container Platform API is not
available, or the kubelet is not properly functioning on the target node, oc
operations will be impacted. In such situations, it is possible to access nodes
using ssh core@<node>.<cluster_name>.<base_domain>.
NOTE
When the MCO detects any of the following changes, it applies the update
without draining or rebooting the node:
To avoid unwanted disruptions, you can modify the machine config pool (MCP) to prevent automatic
rebooting after the Operator makes changes to the machine config.
7.6.6.1. Disabling the Machine Config Operator from automatically rebooting by using the
console
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can
use the OpenShift Container Platform web console to modify the machine config pool (MCP) to
prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would
normally be part of the MCO update process.
NOTE
See second NOTE in Disabling the Machine Config Operator from automatically
rebooting.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
To pause or unpause automatic MCO update rebooting:
1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin
role.
3. On the MachineConfigPools page, click either master or worker, depending upon which
nodes you want to pause rebooting for.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
# ...
spec:
# ...
paused: true 1
# ...
If the MCP has pending changes while paused, the Updated column is False and Updating
is False. When Updated is True and Updating is False, there are no pending changes.
IMPORTANT
If there are pending changes (where both the Updated and Updating
columns are False), it is recommended to schedule a maintenance window
for a reboot as early as possible. Use the following steps for unpausing the
autoreboot process to apply the changes that were queued since the last
reboot.
1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin
role.
3. On the MachineConfigPools page, click either master or worker, depending upon which
nodes you want to unpause rebooting for.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
# ...
spec:
# ...
paused: false 1
# ...
NOTE
By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat
Enterprise Linux CoreOS (RHCOS) as needed.
If the MCP is applying any pending changes, the Updated column is False and the
Updating column is True. When Updated is True and Updating is False, there are no
further changes being made.
7.6.6.2. Disabling the Machine Config Operator from automatically rebooting by using the
CLI
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can
modify the machine config pool (MCP) using the OpenShift CLI (oc) to prevent the MCO from making
any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO
update process.
NOTE
See second NOTE in Disabling the Machine Config Operator from automatically
rebooting.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
To pause or unpause automatic MCO update rebooting:
1. Update the MachineConfigPool custom resource to set the spec.paused field to true.
Worker nodes
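One way to set the field, shown here for the worker pool as a sketch:
$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker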
Worker nodes
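One way to verify the value of the field, shown here for the worker pool as a sketch:
$ oc get machineconfigpool/worker -o jsonpath='{.spec.paused}'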
Example output
true
$ oc get machineconfigpool
Example output
If the UPDATED column is False and UPDATING is False, there are pending changes.
When UPDATED is True and UPDATING is False, there are no pending changes. In the
previous example, the worker node has pending changes. The control plane node does not
have any pending changes.
IMPORTANT
If there are pending changes (where both the Updated and Updating
columns are False), it is recommended to schedule a maintenance window
for a reboot as early as possible. Use the following steps for unpausing the
autoreboot process to apply the changes that were queued since the last
reboot.
1. Update the MachineConfigPool custom resource to set the spec.paused field to false.
Worker nodes
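One way to set the field, shown here for the worker pool as a sketch:
$ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker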
NOTE
By unpausing an MCP, the MCO applies all paused changes and reboots Red
Hat Enterprise Linux CoreOS (RHCOS) as needed.
Worker nodes
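One way to verify the value of the field, shown here for the worker pool as a sketch:
$ oc get machineconfigpool/worker -o jsonpath='{.spec.paused}'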
Example output
false
$ oc get machineconfigpool
Example output
If the MCP is applying any pending changes, the UPDATED column is False and the
UPDATING column is True. When UPDATED is True and UPDATING is False, there are no
further changes being made. In the previous example, the MCO is updating the worker
node.
Example output
ImagePullBackOff for
Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-
bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"
Example output
rpc error: code = Unknown desc = error pinging docker registry example.com: Get
"https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and
other related objects. After recreating the subscription, OLM then reinstalls the correct version of the
Operator.
Prerequisites
You have a failing subscription that is unable to pull an inaccessible bundle image.
Procedure
1. Get the names of the Subscription and ClusterServiceVersion objects from the namespace
where the Operator is installed:
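For example:
$ oc get sub,csv -n <namespace>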
Example output
REPLACES PHASE
clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift
Elasticsearch Operator 5.0.0-65 Succeeded
4. Get the names of any failing jobs and related config maps in the openshift-marketplace
namespace:
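For example:
$ oc get job,configmap -n openshift-marketplace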
Example output
This ensures pods that try to pull the inaccessible image are not recreated.
Verification
...
message: 'Failed to delete all resource types, 1 remaining: Internal error occurred:
error resolving resource'
...
These types of issues can prevent an Operator from being reinstalled successfully.
WARNING
The following procedure shows how to troubleshoot when an Operator cannot be reinstalled because an
existing custom resource definition (CRD) from a previous installation of the Operator is preventing a
related namespace from deleting successfully.
Procedure
1. Check if there are any namespaces related to the Operator that are stuck in "Terminating" state:
$ oc get namespaces
Example output
operator-ns-1 Terminating
2. Check if there are any CRDs related to the Operator that are still present after the failed
uninstallation:
$ oc get crds
NOTE
CRDs are global cluster definitions; the actual custom resource (CR) instances
related to the CRDs could be in other namespaces or be global cluster instances.
3. If there are any CRDs that you know were provided or managed by the Operator and that should
have been deleted after uninstallation, delete the CRD:
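For example:
$ oc delete crd <crd_name>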
4. Check if there are any remaining CR instances related to the Operator that are still present after
uninstallation, and if so, delete the CRs:
a. The type of CRs to search for can be difficult to determine after uninstallation and can
require knowing what CRDs the Operator manages. For example, if you are troubleshooting
an uninstallation of the etcd Operator, which provides the EtcdCluster CRD, you can search
for remaining EtcdCluster CRs in a namespace:
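For example, a search of a single namespace and a cluster-wide search; the resource kind depends on the CRDs that the Operator provided:
$ oc get EtcdCluster -n <namespace_name>
$ oc get EtcdCluster --all-namespaces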
b. If there are any remaining CRs that should be removed, delete the instances:
IMPORTANT
If the namespace or other Operator resources are still not uninstalled cleanly,
contact Red Hat Support.
Verification
Additional resources
After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed.
Depending on policy and exit code, Pods are either removed after exiting or retained so that their logs
can be accessed.
The first thing to check when pod issues arise is the pod’s status. If an explicit pod failure has occurred,
observe the pod’s error state to identify specific image, container, or pod network issues. Focus
diagnostic data collection according to the error state. Review pod event messages, as well as pod and
container log information. Diagnose issues dynamically by accessing running Pods on the command line,
or start a debug pod with root access based on a problematic pod’s deployment configuration.
The following table provides a list of pod error states along with their descriptions.
ErrImageNeverPull    PullPolicy is set to NeverPullImage and the target image is not present locally on the host.
ErrRegistryUnavailable    When attempting to retrieve an image from a registry, an HTTP error was encountered.
ErrContainerNotFound    The specified container is either not present or not managed by the kubelet, within the declared pod.
ErrCrashLoopBackOff    A container has terminated. The kubelet will not attempt to restart it.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
skopeo is installed.
Procedure
$ oc project <project_name>
2. List pods running within the namespace, as well as pod status, error states, restarts, and age:
$ oc get pods
$ oc status
5. If the base image reference is not correct, update the reference in the deployment
configuration:
$ oc edit deployment/my-deployment
6. When you save your changes and exit the editor, the changed deployment configuration is
automatically redeployed. Watch the pod status as the deployment progresses to determine
whether the issue has been resolved:
$ oc get pods -w
7. Review events within the namespace for diagnostic information relating to pod failures:
$ oc get events
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
$ oc logs <pod_name>
Logs retrieved using the preceding oc logs commands are composed of messages sent to
stdout within pods or containers.
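The later sub-steps of this procedure inspect log files under /var/log inside a pod. A typical listing command, with the pod name as a placeholder, that produces output like the following example is:
$ oc exec <pod_name> -- ls -alh /var/log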
Example output
total 124K
drwxr-xr-x. 1 root root 33 Aug 11 11:23 .
drwxr-xr-x. 1 root root 28 Sep 6 2022 ..
-rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp
-rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log
-rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log
Example output
c. List log files and subdirectories contained in /var/log within a specific container:
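For example:
$ oc exec <pod_name> -c <container_name> -- ls -alh /var/log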
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Switch into the project that contains the pod you would like to access. This is necessary
because the oc rsh command does not accept the -n namespace option:
$ oc project <namespace>
$ oc rsh <pod_name> 1
1 If a pod has multiple containers, oc rsh defaults to the first container unless -c
<container_name> is specified.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
b. Start a debug pod with root privileges, based on the deployment configuration:
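For example, for a Deployment with the hypothetical name my-deployment:
$ oc debug deployment/my-deployment --as-root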
NOTE
You can append -- <command> to the preceding oc debug commands to run individual
commands within a debug pod, instead of running an interactive shell.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
NOTE
For oc cp to function, the tar binary must be available within the container.
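Typical forms of the copy commands, with paths and names as placeholders:
$ oc cp <local_path> <pod_name>:/<container_path>
$ oc cp <pod_name>:/<container_path> <local_path>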
To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to
each of the following S2I stages:
1. During the build configuration stage, a build pod is used to create an application container
image from a base image and application source code.
2. During the deployment configuration stage, a deployment pod is used to deploy application
pods from the application container image that was built in the build configuration stage. The
deployment pod also deploys other resources such as services and routes. The deployment
configuration begins after the build configuration succeeds.
3. After the deployment pod has started the application pods, application failures can occur
within the running application pods. For instance, an application might not behave as expected
even though the application pods are in a Running state. In this scenario, you can access
running application pods to investigate application failures within a pod.
2. Determine the stage of the S2I process where the problem occurred
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Watch the pod status throughout the S2I process to determine at which stage a failure occurs:
$ oc get pods -w 1
1 Use -w to monitor pods for changes until you quit the command using Ctrl+C.
$ oc logs -f pod/<application_name>-<build_number>-build
NOTE
Alternatively, you can review the build configuration’s logs using oc logs -f
bc/<application_name>. The build configuration’s logs include the logs from
the build pod.
$ oc logs -f pod/<application_name>-<build_number>-deploy
NOTE
$ oc logs -f pod/<application_name>-<build_number>-<random_string>
Review the logs from the application pods, including application-specific log files that are not
collected by the OpenShift Logging framework.
Test application functionality interactively and run diagnostic tools in an application container.
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. List events relating to a specific application pod. The following example retrieves events for an
application pod named my-app-1-akdlg:
$ oc describe pod/my-app-1-akdlg
$ oc logs -f pod/my-app-1-akdlg
3. Query specific logs within a running application pod. Logs that are sent to stdout are collected
by the OpenShift Logging framework and are included in the output of the preceding
command. The following query is only required for logs that are not sent to stdout.
a. If an application log can be accessed without root privileges within a pod, concatenate the
log file as follows:
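For example, assuming the my-app-1-akdlg application pod used in this procedure, the log file can be concatenated with a command similar to the following; the log path is a placeholder:
$ oc exec my-app-1-akdlg -- cat <path_to_log>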
b. If root access is required to view an application log, you can start a debug container with root
privileges and then view the log file from within the container. Start the debug container
from the project’s DeploymentConfig object. Pod users typically run with non-root
privileges, but running troubleshooting pods with temporary root privileges can be useful
during issue investigation:
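A sketch of such an invocation, with the deployment configuration name and log path supplied as placeholders, is:
$ oc debug dc/<deployment_configuration> --as-root -- cat <path_to_log>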
NOTE
You can access an interactive shell with root access within the debug pod if
you run oc debug dc/<deployment_configuration> --as-root without
appending -- <command>.
4. Test application functionality interactively and run diagnostic tools, in an application container
with an interactive shell.
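a. Start an interactive shell in the application container. For example, assuming the my-app-1-akdlg pod used earlier and an image that provides /bin/bash:
$ oc exec -it my-app-1-akdlg -- /bin/bash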
b. Test application functionality interactively from within the shell. For example, you can run
the container’s entry point command and observe the results. Then, test changes from the
command line directly, before updating the source code and rebuilding the application
container through the S2I process.
NOTE
5. If diagnostic binaries are not available within a container, you can run a host’s diagnostic binaries
within a container’s namespace by using nsenter. The following example runs ip ad within a
container’s namespace, using the host’s ip binary.
a. Enter into a debug session on the target node. This step instantiates a debug pod called
<node_name>-debug:
$ oc debug node/my-cluster-node
b. Set /host as the root directory within the debug shell. The debug pod mounts the host’s
root file system in /host within the pod. By changing the root directory to /host, you can run
binaries contained in the host’s executable paths:
# chroot /host
NOTE
OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise
Linux CoreOS (RHCOS) are immutable and rely on Operators to apply
cluster changes. Accessing cluster nodes by using SSH is not recommended.
However, if the OpenShift Container Platform API is not available, or the
kubelet is not properly functioning on the target node, oc operations will be
impacted. In such situations, it is possible to access nodes using ssh
core@<node>.<cluster_name>.<base_domain> instead.
# crictl ps
d. Determine the container’s process ID. In this example, the target container ID is
a7fe32346b120:
# crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print $2}'
e. Run ip ad within the container’s namespace, using the host’s ip binary. This example uses
31150 as the container’s process ID. The nsenter command enters the namespace of a
target process and runs a command in its namespace. Because the target process in this
example is a container’s process ID, the ip ad command is run in the container’s namespace
from the host:
# nsenter -n -t 31150 -- ip ad
NOTE
However, mounting on a new node is not possible because the failed node is unable to unmount the
attached volume.
Example output
Procedure
To resolve the multi-attach issue, use one of the following solutions:
If you encounter a multi-attach error message with an RWO volume, force delete the pod on a
shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic
persistent volumes are attached.
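A force delete of the affected pod typically takes a form similar to the following; the pod name is a placeholder:
$ oc delete pod <pod_name> --force --grace-period=0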
This command deletes the volumes stuck on shutdown or crashed nodes after six minutes.
The WMCO requires your OpenShift Container Platform cluster to be configured with hybrid networking
using OVN-Kubernetes; the WMCO cannot complete the installation process without hybrid networking
available. This is necessary to manage nodes on multiple operating systems (OS) and OS variants. This
must be completed during the installation of your cluster.
7.10.2. Investigating why Windows Machine does not become compute node
There are various reasons why a Windows Machine does not become a compute node. The best way to
investigate this problem is to collect the Windows Machine Config Operator (WMCO) logs.
Prerequisites
You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle
Manager (OLM).
Procedure
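For example, assuming the WMCO is installed under its default deployment name in the openshift-windows-machine-config-operator namespace, the Operator logs can be followed with a command similar to:
$ oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator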
Prerequisites
You have installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle
Manager (OLM).
You have added the key used in the cloud-private-key secret and the key used when creating
the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-
agent after use.
Procedure
1 Specify the cloud provider username, such as Administrator for Amazon Web Services
(AWS) or capi for Microsoft Azure.
2 Specify the internal IP address of the node, which can be discovered by running the
following command:
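For example, the INTERNAL-IP column of wide node output shows each node's internal IP address:
$ oc get nodes -o wide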
You can access a Windows node by using a Remote Desktop Protocol (RDP).
Prerequisites
You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle
Manager (OLM).
You have added the key used in the cloud-private-key secret and the key used when creating
the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-
agent after use.
Procedure
$ ssh -L 2020:<windows_node_internal_ip>:3389 \ 1
core@$(oc get service --all-namespaces -l run=ssh-bastion -o go-template="{{ with (index
(index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}")
1 Specify the internal IP address of the node, which can be discovered by running the
following command:
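As in the previous procedure, one way to discover the internal IP address is to inspect the INTERNAL-IP column of wide node output:
$ oc get nodes -o wide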
2. From within the resulting shell, SSH into the Windows node and run the following command to
create a password for the user:
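One way to set the password interactively from a Windows command prompt is sketched below; the user name is a placeholder:
C:\> net user <username> *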
1 Specify the cloud provider user name, such as Administrator for AWS or capi for Azure.
You can now remotely access the Windows node at localhost:2020 using an RDP client.
Prerequisites
You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle
Manager (OLM).
Procedure
1. To view the logs under all directories in C:\var\logs, run the following command:
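A sketch of such a command, assuming the path-listing form of oc adm node-logs and selecting Windows nodes by label, is shown below; the paths in the example output that follows are typical of its output:
$ oc adm node-logs -l kubernetes.io/os=windows --path=/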
/ip-10-0-138-252.us-east-2.compute.internal containers \
/ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay \
/ip-10-0-138-252.us-east-2.compute.internal kube-proxy \
/ip-10-0-138-252.us-east-2.compute.internal kubelet \
/ip-10-0-138-252.us-east-2.compute.internal pods
2. You can now list files in the directories using the same command and view the individual log
files. For example, to view the kubelet logs, run the following command:
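A sketch, assuming a kubelet.log file under the kubelet directory listed above, is:
$ oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log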
Prerequisites
You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle
Manager (OLM).
Procedure
To view logs from all applications logging to the event logs on the Windows machine, run:
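A sketch of such a command, assuming the journal path of oc adm node-logs, is:
$ oc adm node-logs -l kubernetes.io/os=windows --path=journal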
The same command is executed when collecting logs with oc adm must-gather.
Other Windows application logs from the event log can also be collected by specifying the
respective service with a -u flag. For example, you can run the following command to collect
logs for the docker runtime service:
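For example, assuming the same journal path with the docker unit selected by the -u flag:
$ oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker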
Prerequisites
You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle
Manager (OLM).
Procedure
C:\> powershell
Prerequisites
You have access to the cluster as a user with the cluster-admin role.
Procedure
1. Check that the corresponding labels match in the service and ServiceMonitor resource configurations.
a. Obtain the label defined in the service. The following example queries the prometheus-
example-app service in the ns1 project:
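For example, the service definition, including its labels, can be displayed with:
$ oc -n ns1 get service prometheus-example-app -o yaml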
Example output
labels:
  app: prometheus-example-app
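b. Check that the app: prometheus-example-app label shown above also appears under matchLabels in the ServiceMonitor resource configuration. For example, the ServiceMonitor resource can be displayed with a command similar to:
$ oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml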
Example output
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - interval: 30s
    port: web
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app
NOTE
Example output
If there is an issue with the service monitor, the logs might include an error similar to this example:
3. Review the target status for your endpoint on the Metrics targets page in the OpenShift Container Platform web console UI.
a. Log in to the OpenShift Container Platform web console and navigate to Observe →
Targets in the Administrator perspective.
b. Locate the metrics endpoint in the list, and review the status of the target in the Status
column.
c. If the Status is Down, click the URL for the endpoint to view more information on the
Target Details page for that metrics target.
4. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project.
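a. Edit the config map that configures user workload monitoring. Assuming the user-workload-monitoring-config config map shown in the example below, it can be opened for editing with:
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config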
b. Add logLevel: debug for prometheusOperator under data/config.yaml to set the log
level to debug:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheusOperator:
      logLevel: debug
# ...
c. Save the file to apply the changes. The affected prometheus-operator pod is
automatically redeployed.
d. Confirm that the debug log-level has been applied to the prometheus-operator
deployment in the openshift-user-workload-monitoring project:
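For example, the deployment arguments can be inspected with a command similar to the following; the prometheus-operator deployment name is assumed:
$ oc -n openshift-user-workload-monitoring get deployment prometheus-operator -o yaml | grep "log-level"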
Example output
- --log-level=debug
Debug level logging will show all calls made by the Prometheus Operator.
NOTE
f. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor
resource. Review the logs for other related errors.
Additional resources
See Specifying how a service is monitored for details on how to create a service monitor or pod
monitor
Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels
can result in an exponential increase in the number of time series created. This can impact Prometheus
performance and can consume a lot of disk space.
You can use the following measures when Prometheus consumes a lot of disk:
Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges.
Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics.
NOTE
Using attributes that are bound to a limited set of possible values reduces the
number of potential key-value pair combinations.
Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges.
Prerequisites
You have access to the cluster as a user with the cluster-admin cluster role.
Procedure
2. Enter a Prometheus Query Language (PromQL) query in the Expression field. The following
example queries help to identify high cardinality metrics that might result in high disk space
consumption:
By running the following query, you can identify the ten jobs that have the highest number
of scrape samples:
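A query of the following form, which counts the series per job across all metric names and keeps the ten largest, is one way to do this:
topk(10, count by (job) ({__name__=~".+"}))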
By running the following query, you can pinpoint time series churn by identifying the ten
jobs that have created the most time series data in the last hour:
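A sketch of such a query, assuming the scrape_series_added metric exposed by Prometheus, is:
topk(10, sum by (job) (sum_over_time(scrape_series_added[1h])))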
3. Investigate the number of unbound label values assigned to metrics with higher than expected
scrape sample counts:
If the metrics relate to a user-defined project, review the metrics key-value pairs
assigned to your workload. These are implemented through Prometheus client libraries at
the application level. Try to limit the number of unbound attributes referenced in your labels.
If the metrics relate to a core OpenShift Container Platform project, create a Red Hat
support case on the Red Hat Customer Portal .
4. Review the TSDB status using the Prometheus HTTP API by following these steps when logged
in as a cluster administrator:
a. Get the Prometheus API route URL by running the following command:
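For example, the route host can be captured in a shell variable; the prometheus-k8s route in the openshift-monitoring project is assumed:
$ HOST=$(oc -n openshift-monitoring get route prometheus-k8s -o jsonpath='{.status.ingress[].host}')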
c. Query the TSDB status for Prometheus by running the following command:
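A sketch of such a query, assuming a bearer token for the current session (for example, from oc whoami -t) and the route host captured above:
$ TOKEN=$(oc whoami -t)
$ curl -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/status/tsdb"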
Example output
"status": "success","data":{"headStats":{"numSeries":507473,
"numLabelPairs":19832,"chunkCount":946298,"minTime":1712253600010,
"maxTime":1712257935346},"seriesCountByMetricName":
[{"name":"etcd_request_duration_seconds_bucket","value":51840},
{"name":"apiserver_request_sli_duration_seconds_bucket","value":47718},
...
Additional resources
See Setting a scrape sample limit for user-defined projects for details on how to set a scrape
sample limit and create related alerting rules
The critical alert fires when a persistent volume (PV) claimed by a prometheus-k8s-* pod in the
openshift-monitoring project has less than 3% total space remaining. This can cause Prometheus to
function abnormally.
NOTE
Critical alert: The alert with the severity="critical" label is triggered when the
mounted PV has less than 3% total space remaining.
Warning alert: The alert with the severity="warning" label is triggered when the
mounted PV has less than 15% total space remaining and is expected to fill up
within four days.
To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more
space for the PV.
Prerequisites
You have access to the cluster as a user with the cluster-admin cluster role.
Procedure
1. List the size of all TSDB blocks, sorted from oldest to newest, by running the following
command:
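A sketch of one way to do this, assuming the prometheus container in the prometheus-k8s-0 pod provides a shell and the du utility, and that the TSDB block directories (whose ULID names begin with 01, as in the example output) sit directly under /prometheus:
$ oc -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- sh -c 'cd /prometheus && du -hs 01*'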
Example output
308M 01HVKMPKQWZYWS8WVDAYQHNMW6
52M 01HVK64DTDA81799TBR9QDECEZ
102M 01HVK64DS7TRZRWF2756KHST5X
140M 01HVJS59K11FBVAPVY57K88Z11
90M 01HVH2A5Z58SKT810EM6B9AT50
152M 01HV8ZDVQMX41MKCN84S32RRZ1
354M 01HV6Q2N26BK63G4RYTST71FBF
156M 01HV664H9J9Z1FTZD73RD1563E
216M 01HTHXB60A7F239HN7S2TENPNS
104M 01HTHMGRXGS0WXA3WATRXHR36B
2. Identify which and how many blocks could be removed, then remove the blocks. The following
example command removes the three oldest Prometheus TSDB blocks from the prometheus-
k8s-0 pod:
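A minimal sketch, under the same assumptions as the listing command above (ULID-named block directories under /prometheus sort oldest first in lexicographic order), removes the three oldest blocks; verify the block names before removing anything:
$ oc -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- sh -c 'cd /prometheus && ls -d 01* | head -3 | xargs rm -r'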
3. Verify the usage of the mounted PV and ensure there is enough space available by running the
following command:
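A sketch, again assuming the prometheus container in the prometheus-k8s-0 pod provides the df utility:
$ oc -n openshift-monitoring exec prometheus-k8s-0 -c prometheus -- df -h /prometheus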
The following example output shows the mounted PV claimed by the prometheus-k8s-0 pod
that has 63% of space remaining:
Example output
With the OpenShift CLI (oc), you can create applications and manage OpenShift Container Platform
projects from a terminal.
If oc command-specific issues arise, increase the oc log level to output API request, API response, and
curl request details generated by the command. This provides a granular view of a particular oc
command’s underlying operation, which in turn might provide insight into the nature of a failure.
oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their
descriptions.
8 Log API requests, headers, and body, plus API response headers and body to stderr.
9 Log API requests, headers, and body, API response headers and body, plus curl
requests to stderr.
10 Log API requests, headers, and body, API response headers and body, plus curl
requests to stderr, in verbose detail.
You can investigate OpenShift CLI (oc) issues by increasing the command’s log level.
The OpenShift Container Platform user’s current session token is typically included in logged curl
requests where required. You can also obtain the current user’s session token manually, for use when
testing aspects of an oc command’s underlying process step-by-step.
Prerequisites
Procedure
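1. Run the command with an increased log level. The general form, using the placeholders described below, is:
$ oc <command> --loglevel <log_level>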
where:
<command>
Specifies the command you are running.
<log_level>
Specifies the log level to apply to the command, from 1 to 10.
To obtain the current user’s session token, run the following command:
$ oc whoami -t
Example output
sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6...