
Opticca - CodeReady - Single Node OpenShift - RHEL

This document provides instructions for deploying and configuring a single node OpenShift cluster using CodeReady Containers on Red Hat Enterprise Linux using the Azure Marketplace image. It describes downloading the CodeReady Containers image, deploying a virtual machine using the image on Azure, connecting to the virtual machine using SSH, and downloading authentication tokens for the OpenShift cluster.


CodeReady
Single Node OpenShift
RHEL OS

Opticca's CodeReady — Single Node OpenShift VM delivers Red Hat OpenShift Local and is the quickest way to get started building OpenShift clusters. It is designed to run on an Azure virtual machine, which simplifies setup and testing, and it provides a cloud development environment with all the tools needed to develop and test container-based applications.

Red Hat CodeReady Containers (CRC) provides a minimal, preconfigured OpenShift 4.x cluster for development and testing purposes.

Overview

The OpenShift cluster provided by CRC is a regular OpenShift Container Platform installation with the following differences:

● The OpenShift Container Platform cluster is ephemeral (stateless) and is not intended for production use.
● CRC does not have a supported upgrade path to newer OpenShift Container Platform versions. Upgrading the OpenShift Container Platform version may cause issues that are difficult to reproduce. (CRC will not support upgrades to new versions; however, newer images can be made available as new versions are released.)
● It uses a single node which behaves as both a control plane and a worker node.
● It disables the Cluster Monitoring Operator by default. This disabled Operator causes the corresponding part of the web console to be non-functional.
● The OpenShift Container Platform cluster runs in a virtual machine known as an instance. This may cause other differences, particularly with external networking.

This is the easiest and fastest way to get a CodeReady Containers OpenShift cluster environment up and running. With this solution, you can have a non-production OpenShift environment in less than 30 minutes.
Benefits of Using the Azure Marketplace Image

Features and benefits of running CodeReady Containers on RHEL:

● Deploy a one-node OpenShift cluster in minutes. Quickly validate the readiness of your core applications for migration to OpenShift or to new versions.
● A low-cost OpenShift environment for testing.
○ A traditional non-production OpenShift cluster would require a minimum of 5 nodes (3 masters, 2 worker/infra) installed on separate virtual machines.
○ Azure ARO requires 6 nodes (3 masters, 3 workers/infra).
● Self-service for developers with minimal overhead on the infrastructure team.

You can use this solution:

● If you are new to OpenShift or want to test a new OpenShift version: get started with an OpenShift cluster with the least effort and lowest cost.
● To prepare and test your application in a non-production OpenShift cluster.
● To validate applications on the latest (target) OpenShift versions.
● To ensure applications deployed to a Kubernetes cluster or older OpenShift versions are stable in the latest OpenShift versions.

Documentation:

https://www.redhat.com/sysadmin/codeready-containers
https://access.redhat.com/documentation/en-us/red_hat_openshift_local/2.4
Image Details

Operating System
Name: Red Hat Enterprise Linux
Version: 8
Size: Standard D4s v3

Prerequisite Software Versions
haproxy: 1.8.23-3.el8
NetworkManager: 1.22.8-9.el8_2
Binary: crc-linux-2.5.1-amd64
CodeReady version: 2.5
OpenShift version: 4.10


Required Permissions in Azure

The built-in Azure RBAC role Virtual Machine Contributor is required at a minimum for marketplace deployment of the Opticca image on an Azure virtual machine. If a custom RBAC role exists and needs to be used, ensure it grants the required permissions mentioned in the link.

The minimum scope for virtual machine creation can be at the level of the resource group under which you want to deploy your virtual machine.
Deployment of VM using Marketplace Image

If you don't have an Azure subscription, create a free account before you begin.

SIGN IN TO AZURE

Sign in to the Azure portal.

CREATE VIRTUAL MACHINE

1. Enter virtual machines in the search bar.
2. Under Services, select Virtual machines.
3. On the Virtual machines page, select Create and then Virtual machine. The Create a virtual machine page opens.
4. In the Basics tab, under Project details, make sure the correct subscription is selected and then choose Create new resource group. Enter a Resource group name (e.g. myResourceGroup).



5. Under Instance details, enter myVM for the Virtual machine name (or the name of the VM you want to create), and choose opticca_compute_gallery/codeready_openshift_4.10_RHEL8/0.0.2 for your Image. If you don't see the VM image name (it's not currently published on the marketplace), click See all images and, under Shared Images, select codeready_openshift_4.10_RHEL8/latest (the latest image is always selected when using this method).

6. Leave all other defaults. The default size and pricing are shown only as an example; size availability and pricing depend on your region and subscription. It's recommended to use one of the following sizes: Standard_D4_v3, Standard_D8_v3, Standard_D4s_v3, Standard_D8s_v3, Standard_E4s_v3, Standard_E4_v3.


7. Under Administrator account, select Password.
8. In Username, enter azureuser (you can set your own username).
9. In Password, enter your own password of 16 characters. Enter the same in Confirm password.
10. Under Inbound port rules > Public inbound ports, choose Allow selected ports. In Select inbound ports, select SSH (22) from the drop-down. This can also be done in the Networking tab (Step 13).
11. Under Licensing, select Other from the drop-down.
12. Click Next.

Note: You can skip the Disks tab and leave all defaults.


13. Under the Networking tab, under Select inbound ports, also enter 80, 443 (these ports are needed to reach the OpenShift web console through the VM).
14. Leave the remaining defaults as is and select the Review + create button at the bottom of the page.
15. On the Create a virtual machine page, you can see the details of the VM you're about to create. When ready, select Create.
16. When the deployment is finished, select Go to resource.
17. On the page for your new VM, select the value in Public IP address and copy it to your clipboard.


CONNECT TO VIRTUAL MACHINE

Create an SSH connection with the VM.

1. If you're on a Mac or Linux machine, open a Bash prompt and connect using the command:

$ ssh <username>@<public ip address>

2. If you're on a Windows machine, open a PowerShell prompt, or, if you've installed PuTTY or MobaXterm, open it and connect by running the command:

$ ssh <username>@<public ip address>
Post Configuration of Azure VM

After the creation of the VM based on the golden image, follow these instructions:

1. Go to the Red Hat Cloud Portal and log in with a valid account.
   URL: https://cloud.redhat.com
2. In the left menu bar, select OpenShift > Downloads.
3. Under the Tokens section, download the Pull secret.
4. Copy the Pull secret file to the user home directory on the virtual machine. You can use your preferred method to copy pull-secret.txt to the VM (e.g. using scp):

$ scp pull-secret.txt ${user}@${VM_PUBLIC_IP}:~/

If you're using MobaXterm, after connecting to the VM you can upload the Pull secret using the GUI.
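The pull secret is a JSON document, so a quick validation after copying it catches a truncated copy/paste before it surfaces as a confusing start-up failure. This is an optional sketch, not part of the official procedure; the helper name is invented here, and python3 is assumed to be available (it ships with RHEL 8).

```shell
# Optional sanity check (helper name invented here, not part of the official
# docs): the pull secret is a JSON document, so validating it catches a
# truncated copy/paste early. python3 ships with RHEL 8.
check_pull_secret() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid"
  else
    echo "invalid"
  fi
}

# Example: check the file copied in step 4 (prints "valid" or "invalid").
check_pull_secret "${HOME}/pull-secret.txt"
```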
5. Connect to the virtual machine and check the CRC binary:

$ crc version
CRC version: 2.5.1+3d569b8
OpenShift version: 4.10.18
Podman version: 4.1.0

6. Set up the environment:

a. Configure the network-mode to system:

$ crc config set network-mode system

b. Execute cleanup and setup:

$ crc cleanup
$ crc setup

CRC prompts you before use for optional, anonymous usage data collection to assist with development. No personally identifiable information is collected. Consent for usage data collection can be granted or revoked by you at any time. Accept or deny the telemetry data collection.

● For more information about collected data, see https://developers.redhat.com/article/tool-data-collection.
● To grant or revoke consent for usage data collection, see https://crc.dev/crc/#configuring-usage-data-collection_gsg.

This process can take several minutes to complete.

c. Start the CRC instance.


Start CRC and OpenShift SNO

After setup, start the CodeReady and OpenShift instance. To start, you'll need the Pull secret that you copied to the virtual machine during setup. In the virtual machine, run the start command, passing the pull-secret file:

$ crc start -p /home/${user}/pull-secret.txt

NOTE: You can start CRC without the pull-secret.txt file, but you'll then need to manually copy the value of your Pull secret and paste it during the start process. Using this method, you can't run crc-start.sh; instead, use the crc start command.

This process can take several minutes to complete. After the instance has started, you'll see the cluster details and credentials:

Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: ********* #<-- your kubeadmin credentials

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443


NOTE: In some cases, you can encounter a timeout error while waiting for kube-apiserver availability:

INFO Waiting for kube-apiserver availability... [takes around 2min]
Error waiting for apiserver: Temporary error: ssh command error:
command : timeout 5s oc get nodes --context admin --cluster crc --kubeconfig /opt/kubeconfig
err : Process exited with status 1
(x13)

Simply wait a few minutes and check the execution again by running the command crc status.
Check the status of the cluster:

$ crc status
CRC VM: Running
OpenShift: Running (v4.10.18)
Podman:
Disk Usage: 15.96GB of 32.74GB (Inside the CRC VM)
Cache Usage: 16.87GB
Cache Directory: /home/someuser/.crc/cache
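Since the start can take several minutes and may hit the apiserver timeout noted earlier, a small polling helper saves re-running `crc status` by hand. This is a sketch of our own (the function name is invented here, not a CRC command); it assumes crc is on the PATH.

```shell
# Optional polling helper (the function name is invented here): retry
# `crc status` until OpenShift reports Running, which is useful right
# after `crc start`.
wait_for_crc() {
  local max_tries=${1:-20} delay=${2:-30} tries=0
  while [ "$tries" -lt "$max_tries" ]; do
    if crc status 2>/dev/null | grep -q 'OpenShift: *Running'; then
      echo "cluster ready"
      return 0
    fi
    tries=$((tries + 1))
    sleep "$delay"
  done
  echo "timed out waiting for the cluster" >&2
  return 1
}
```

For example, `wait_for_crc 20 30` checks every 30 seconds for up to 10 minutes.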


Accessing the OpenShift Cluster

Access the OpenShift Container Platform cluster running in the CRC instance by using the OpenShift Container Platform web console or the OpenShift CLI (oc).

1. Accessing using the OpenShift CLI (oc)

a. Run the crc oc-env command to print the command needed to add the cached oc executable to your $PATH:

$ crc oc-env

b. Run the printed command:

$ eval $(crc oc-env)

c. Log in as the developer user:

$ oc login -u developer https://api.crc.testing:6443

d. Log in as kubeadmin:

$ oc login -u kubeadmin https://api.crc.testing:6443

NOTE: The crc start command prints the password for the developer user. You can also view it by running the crc console --credentials command.

e. You can now use oc to interact with your OpenShift Container Platform cluster. For example, log in as the kubeadmin user, switch to the admin context, and confirm your identity:

$ oc config use-context crc-admin
$ oc whoami
kubeadmin
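To go a step further and verify that the cluster Operators are available, `oc get clusteroperators` is the standard command. The helper below is a sketch of our own (the function name is invented, and the awk filter over the AVAILABLE column is our addition); it assumes oc is on your PATH via `crc oc-env` and that you are logged in as kubeadmin.

```shell
# Helper (name invented for this sketch): list any cluster Operator whose
# AVAILABLE column is not True. `oc get clusteroperators` is the standard
# command; the awk filter over column 3 (AVAILABLE) is our own addition.
unavailable_operators() {
  oc get clusteroperators --no-headers 2>/dev/null |
    awk '$3 != "True" { print $1 }'
}

# Example: report overall health based on the helper's output.
if [ -z "$(unavailable_operators)" ]; then
  echo "all cluster Operators report Available"
else
  echo "some Operators are not yet Available:"
  unavailable_operators
fi
```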
2. Accessing using HAProxy from a local machine

The server comes with an internal haproxy to allow connections between your local machine and the internal OpenShift web console.

CONFIGURE HAPROXY

a. Check the haproxy configuration:

$ cat /etc/haproxy/haproxy.cfg

global
    debug

defaults
    log global
    mode http
    timeout connect 0
    timeout client 0
    timeout server 0

frontend apps
    bind SERVER_IP:80
    bind SERVER_IP:443
    option tcplog
    mode tcp
    default_backend apps

backend apps
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP check

frontend api
    bind SERVER_IP:6443
    option tcplog
    mode tcp
    default_backend api

backend api
    mode tcp
    balance roundrobin
    option ssl-hello-chk
    server webserver1 CRC_IP:6443 check


The haproxy configuration file contains two variables, SERVER_IP and CRC_IP:

● SERVER_IP needs to be replaced with the internal VM IP.
● CRC_IP needs to be replaced with the CRC IP.

You can replace the variables manually or execute the following commands:

NOTE: To execute the crc ip command, you first need to bring up the instance using crc start -p pull-secret.txt.

$ export SERVER_IP=$(hostname --ip-address)
$ export CRC_IP=$(crc ip)
$ sudo sed -i "s/SERVER_IP/$SERVER_IP/g" /etc/haproxy/haproxy.cfg
$ sudo sed -i "s/CRC_IP/$CRC_IP/g" /etc/haproxy/haproxy.cfg

b. Start the haproxy server:

$ sudo systemctl start haproxy
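If you want to rehearse the substitution before editing the live file, the sketch below applies the same sed expressions to a trimmed scratch copy; the IP values are placeholder assumptions, not values from this guide. Once the real file is updated, `haproxy -c -f /etc/haproxy/haproxy.cfg` validates the configuration before you start the service.

```shell
# Rehearse the substitution on a trimmed scratch copy before touching the
# real /etc/haproxy/haproxy.cfg. The IP values below are placeholder
# assumptions; on the VM use $(hostname --ip-address) and $(crc ip).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
frontend apps
    bind SERVER_IP:443
backend apps
    server webserver1 CRC_IP check
EOF

SERVER_IP="10.0.0.4"
CRC_IP="192.168.130.11"
sed -i "s/SERVER_IP/$SERVER_IP/g; s/CRC_IP/$CRC_IP/g" "$cfg"

# Any leftover placeholder means the substitution missed something.
if grep -Eq 'SERVER_IP|CRC_IP' "$cfg"; then
  echo "placeholders remain" >&2
else
  echo "substitution complete"
fi
```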


CONFIGURE LOCAL DNS RESOLUTION

To access the cluster using the OpenShift URLs, you need to configure your local DNS resolution. You'll need administrator permission to do this, and the procedure varies depending on your operating system.

NOTE: To complete this configuration, you need the Public IP of your virtual machine in Azure.

Add the following entries to your hosts file, changing ${AZ_PUBLIC_IP} to your Public IP:

${AZ_PUBLIC_IP} console-openshift-console.apps-crc.testing
${AZ_PUBLIC_IP} apps-crc.testing
${AZ_PUBLIC_IP} api.crc.testing
${AZ_PUBLIC_IP} oauth-openshift.apps-crc.testing
Find the location of the hosts file for each operating system below:

For Windows

In Windows 10 the hosts file is located at c:\Windows\System32\Drivers\etc\hosts. Right-click Notepad in your Start menu, select Run as Administrator, and open the file.

For macOS

Open the hosts file with sudo from the Terminal:

$ sudo vim /private/etc/hosts

For Linux (RHEL/CentOS)

Open the hosts file with sudo from the Terminal:

$ sudo vim /etc/hosts
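On Linux or macOS, the four entries can also be generated from a single variable rather than typed by hand. A minimal sketch, writing to a scratch file so it is safe to try; the IP shown is a documentation placeholder, not a value from this guide.

```shell
# Generate the four entries from one variable instead of typing them by hand.
# Written to a scratch file so it is safe to try; the IP is a documentation
# placeholder -- substitute your VM's Public IP.
AZ_PUBLIC_IP="203.0.113.10"
hosts_out=$(mktemp)

for host in console-openshift-console.apps-crc.testing \
            apps-crc.testing \
            api.crc.testing \
            oauth-openshift.apps-crc.testing; do
  printf '%s %s\n' "$AZ_PUBLIC_IP" "$host" >> "$hosts_out"
done

cat "$hosts_out"
```

To apply the entries for real, append the generated lines to your hosts file, e.g. `cat "$hosts_out" | sudo tee -a /etc/hosts`.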


ENABLE PORTS 80, 443 AND 6443

Make sure these ports are open in the Network Security Group for the virtual machine in your Azure subscription. Ports 80 and 443 were already opened when you created the virtual machine. Open port 6443 by following these steps:

1. Open the Azure portal (https://portal.azure.com), go to Virtual Machines and click on the virtual machine.
2. In Settings > Networking, add Inbound port rules.
3. Click Add inbound port rule. In Destination port ranges, enter 6443. Set the Priority so the rule takes precedence over any Deny rule (in Azure NSGs, a lower priority number is evaluated first). Enter Port_6443 as the rule Name and click Add.
HAPROXY
Now that you have:

● haproxy configured and running
● local DNS resolution configured
● ports 80, 443 and 6443 opened

you can access the OpenShift console using your browser:

URL: https://console-openshift-console.apps-crc.testing/

Connect with the developer or kubeadmin users. To check the credentials, execute:

$ crc console --credentials

Congratulations, you now have access to your cluster!
