
CKA Test: Kubernetes Role and Policy Tasks

CKA Exam

Uploaded by

Arif Islam

Create a new ClusterRole named deployment-clusterrole that only allows the creation of the following resource types:

• Deployment
• StatefulSet
• DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1.

Limited to namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.

kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployment,statefulset,daemonset


kubectl create serviceaccount cicd-token -n app-team1
kubectl create rolebinding cicd-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1
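One way to check the result is `kubectl auth can-i` with the ServiceAccount as the impersonated subject. A sketch follows; since no cluster is available here, the command is printed rather than executed:

```shell
# ServiceAccount subjects follow the system:serviceaccount:<namespace>:<name> convention.
SA="system:serviceaccount:app-team1:cicd-token"
# On the exam cluster, run this directly and expect it to answer "yes".
echo "kubectl auth can-i create deployments --as=$SA -n app-team1"
```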

02 Task-English

Set the node named ek8s-node-1 as unavailable and reschedule all the Pods running on it.

kubectl cordon ek8s-node-1


kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force

03 Task-English

Given an existing Kubernetes cluster running version 1.20.0, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.20.1.

You are also expected to upgrade kubelet and kubectl on the master node. Be sure to drain the master node before upgrading it and uncordon it after the upgrade.

Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service or any other addons.

$ kubectl config use-context mk8s


$ kubectl get node
$ kubectl cordon mk8s-master-1
$ kubectl drain mk8s-master-1 --delete-local-data --ignore-daemonsets --force
$ ssh mk8s-master-1
$ sudo -i
$ apt install kubeadm=1.20.1-00 -y
$ kubeadm version  # check the kubeadm version
$ kubeadm upgrade plan
$ kubeadm upgrade apply v1.20.1 --etcd-upgrade=false
$ apt install kubelet=1.20.1-00 kubectl=1.20.1-00 -y
$ systemctl restart kubelet
$ exit
$ exit  # if you used sudo -i, you must exit twice
$ kubectl uncordon mk8s-master-1
$ kubectl get node  # confirm that only the master node was upgraded to 1.20.1
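On kubeadm-provisioned nodes the packages are usually apt-held, so the installs above may need an unhold/hold around them, as in the official upgrade docs. A sketch (printed rather than executed, since no node is available here):

```shell
# Assumption: kubeadm/kubelet/kubectl are pinned with apt-mark hold.
for p in kubeadm kubelet kubectl; do
  echo "apt-mark unhold $p && apt-get install -y $p=1.20.1-00 && apt-mark hold $p"
done
```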

04 Task-Chinese

First, create a snapshot of the existing etcd instance running on [Link] and save the snapshot to /data/backup/etcd-[Link].

Creating a snapshot for a given instance is expected to be completed in a few seconds. If the operation seems to hang, there may be a problem with the command. Use ctrl+c to cancel the operation and try again.

Then restore the existing previous snapshot located at /var/data/etcd-[Link].

The following TLS certificates and keys are provided to connect to the server via etcdctl:

• CA certificate: /opt/KUIN00601/[Link]
• Client certificate: /opt/KUIN00601/[Link]
• Client key: /opt/KUIN00601/[Link]

# ETCDCTL_API=3 etcdctl --endpoints=[Link] --cacert=/opt/KUIN00601/[Link] --cert=/opt/KUIN00601/[Link] --key=/opt/KUIN00601/[Link] snapshot save /data/backup/etcd-[Link]
# ETCDCTL_API=3 etcdctl snapshot restore /var/data/etcd-[Link]
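The etcdctl v3 API takes `--cacert`/`--cert`/`--key`, not the v2-era `--ca-file`/`--cert-file`/`--key-file` flags, and `snapshot restore` works offline on the snapshot file, so it does not need the endpoint or TLS flags. A sketch of the flag layout (endpoint and file names are elided in the source, shown as placeholders; printed rather than executed):

```shell
# Placeholders <endpoint>, <ca>, <cert>, <key>, <snapshot> stand in for the elided values.
TLS="--cacert=/opt/KUIN00601/<ca> --cert=/opt/KUIN00601/<cert> --key=/opt/KUIN00601/<key>"
echo "ETCDCTL_API=3 etcdctl --endpoints=<endpoint> $TLS snapshot save /data/backup/<snapshot>"
# restore is offline; --data-dir picks where the restored data goes
echo "ETCDCTL_API=3 etcdctl snapshot restore /var/data/<snapshot> --data-dir=<new-data-dir>"
```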

05 Task-English

Create a new NetworkPolicy named allow-port-from-namespace to allow Pods in the existing namespace internal to connect to port 8080 of other Pods in the same namespace.

Ensure that the new NetworkPolicy:

• does not allow access to Pods not listening on port 8080.
• does not allow access from Pods not in namespace internal.

(Chinese task text, a variant that uses port 9000: create a NetworkPolicy for Pods in namespace internal that only allows access from Pods in the same namespace, and only on port 9000. Pods from other namespaces must not have access; Pods not listening on port 9000 must not be reachable.)
# vi network.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - port: 8080
# the podSelector in the from clause limits access to Pods within this namespace
kubectl create -f network.yaml

06 Task-English

Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.

Create a new service named front-end-svc exposing the container port http.

Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

View existing deployment

kubectl get deployment


NAME READY UP-TO-DATE AVAILABLE AGE
front-end 1/1 1 1 18s

Edit, add port configuration

kubectl edit deployment front-end


spec:
  containers:
  - image: nginx:1.14.2
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
# Expose the deployment as a NodePort service
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=80 --type=NodePort

07 Task-English

Create a new nginx Ingress resource as follows:

• Name: ping

• Namespace: ing-internal

• Exposing service hi on path /hi using service port 5678

The availability of service hi can be checked using the following command, which should return hi:

curl -kL /hi

vi [Link]
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
kubectl create -f [Link]

08 Task-English

Scale the deployment presentation to 3 pods.


kubectl get deployment
kubectl scale deployment.apps/presentation --replicas=3

09 Task-English

Task

Schedule a pod as follows:

• name: nginx-kusc00401

• Image: nginx

• Node selector: disk=spinning

apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning

kubectl create -f [Link]

10 Task-English
Task

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/KUSC00402/[Link].

kubectl get node | grep -w "Ready" | wc -l
kubectl describe nodes | grep -i taints | grep -i noschedule | wc -l
# subtract the NoSchedule-tainted count from the Ready count and write the result, e.g.:
echo 2 > /opt/KUSC00402/[Link]
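The counting logic can be sketched on fabricated `kubectl get node` output (node names and counts below are made up for illustration); `grep -w` matters because plain `grep -i ready` would also match NotReady:

```shell
cat > /tmp/nodes.txt <<'EOF'
master-1   Ready      control-plane   30d   v1.20.1
worker-1   Ready      <none>          30d   v1.20.1
worker-2   NotReady   <none>          30d   v1.20.1
EOF
ready=$(grep -cw "Ready" /tmp/nodes.txt)   # -w keeps NotReady from matching
tainted=1   # suppose kubectl describe nodes showed one NoSchedule taint on a Ready node
echo $((ready - tainted))   # prints 1 with this sample data
```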

11 Task-English

Create a pod named kucc8 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified):

nginx + redis + memcached + consul.

kubectl run kucc8 --image=nginx --dry-run=client -o yaml > [Link]


apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
kubectl create -f [Link]

12 Task-English
Task

Create a persistent volume with name app-config, of capacity 1Gi and access mode ReadOnlyMany. The type of volume is hostPath and its location is /srv/app-config.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /srv/app-config
kubectl create -f [Link]

13 Task-English

Task

Create a new PersistentVolumeClaim:

• Name: pv-volume
• Class: csi-hostpath-sc
• Capacity: 10Mi

Create a new Pod which mounts the PersistentVolumeClaim as a volume:

• Name: web-server
• Image: nginx
• Mount path: /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume.

Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-volume
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume
kubectl create -f [Link]
kubectl edit pvc pv-volume --record
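The task also allows `kubectl patch` for the expansion. A sketch of that alternative, printed rather than executed since no cluster is available here (the patch body targets spec.resources.requests.storage):

```shell
PATCH='{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
echo "kubectl patch pvc pv-volume -p '$PATCH' --record"
```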

14 Task-English

Task

Monitor the logs of pod bar and:

• Extract log lines corresponding to error unable-to-access-website
• Write them to /opt/KUTR00101/bar


kubectl logs bar | grep 'unable-to-access-website' > /opt/KUTR00101/bar

cat /opt/KUTR00101/bar
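The filter can be sketched on fabricated log lines (the log content below is made up):

```shell
cat > /tmp/bar.log <<'EOF'
2021-06-01 INFO starting up
2021-06-01 ERROR unable-to-access-website
2021-06-01 INFO retrying
EOF
grep 'unable-to-access-website' /tmp/bar.log > /tmp/bar.out
wc -l < /tmp/bar.out   # prints 1: only the matching line is written
```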

15 Task-English

Context

Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Task

Add a busybox sidecar container to the existing Pod big-corp-app. The new sidecar container has to run the following command:

/bin/sh -c 'tail -n+1 -f /var/log/big-corp-app.log'

Use a volume mount named logs to make the file /var/log/big-corp-app.log available to the sidecar container.

Don't modify the existing container.

Don't modify the path of the log file; both containers must access it at /var/log/big-corp-app.log.

apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/big-corp-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
kubectl logs big-corp-app -c count-log-1

16 Task-English

From the Pods with label name=name-cpu-loader, find the Pods running high CPU workloads and write the name of the Pod consuming the most CPU to the file /opt/KUTR00401/[Link] (which already exists).

kubectl top pods -l name=name-cpu-loader --sort-by=cpu


echo '<name of the top-ranked pod>' >> /opt/KUTR00401/[Link]
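Since `--sort-by=cpu` sorts descending, the first data row is the biggest consumer; extracting its name can be sketched on fabricated `kubectl top pods` output (pod names and figures below are made up):

```shell
cat > /tmp/top.txt <<'EOF'
NAME       CPU(cores)   MEMORY(bytes)
cpu-hog    950m         120Mi
idle-pod   2m           10Mi
EOF
top_pod=$(tail -n +2 /tmp/top.txt | head -1 | awk '{print $1}')   # skip header, take first row's name
echo "$top_pod"   # prints cpu-hog
# on the exam: echo "$top_pod" >> /opt/KUTR00401/[Link]
```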

17 Task-English
Task

A Kubernetes worker node, named wk8s-node-0, is in state NotReady.

Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.

You can ssh to the failed node using:

ssh wk8s-node-0

You can assume elevated privileges on the node with the following command:

sudo -i

# The node wk8s-node-0 is NotReady; bring it back to Ready and make the fix survive a reboot
# Connect to the NotReady node
ssh wk8s-node-0
# Gain elevated privileges
sudo -i
# Check whether the kubelet service is running
systemctl status kubelet
# If the service is not running, start it
systemctl start kubelet
# Enable it at boot so the change is permanent
systemctl enable kubelet

