CKA Test: Kubernetes Role and Policy Tasks
To create a constrained NetworkPolicy, define an Ingress policy that specifies a podSelector and a port. To limit access to Pods within the same namespace on a specific port, use a podSelector in the ingress `from` rule so that only Pods from the policy's own namespace match. For example, to allow Pods in the 'internal' namespace to connect only via port 8080, create a NetworkPolicy like the one in `network.yaml` with the port set to 8080 and the namespace set to 'internal', ensuring other ports and namespaces remain inaccessible.
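A sketch of what `network.yaml` might look like under these assumptions (the policy name is illustrative; the empty podSelectors mean "all Pods in this namespace"):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace   # hypothetical name
  namespace: internal
spec:
  podSelector: {}          # policy applies to every Pod in 'internal'
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # only Pods from the same namespace may connect
    ports:
    - protocol: TCP
      port: 8080           # traffic to any other port is denied
```

Because a NetworkPolicy is default-deny for the traffic types it selects, listing only port 8080 implicitly blocks all other ingress to these Pods.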
Scaling a Kubernetes Deployment affects resource consumption and application performance. It involves adjusting the number of Pod replicas to meet demand. Precautions include ensuring adequate node resources, understanding the implications for load balancing, and monitoring for performance bottlenecks. Scaling is straightforward with `kubectl scale deployment.apps/<deployment-name> --replicas=<number>`; verify the new replica count with `kubectl get deployment <deployment-name>`.
To bind a ClusterRole to a ServiceAccount within a specific namespace, create a ClusterRole with the necessary permissions and a ServiceAccount in the desired namespace. Then use a RoleBinding to associate the ClusterRole with the ServiceAccount in that namespace; the RoleBinding must live in the target namespace so the ClusterRole's permissions are scoped to it. For example, to allow a ServiceAccount named 'cicd-token' to create deployments, statefulsets, and daemonsets in the 'app-team1' namespace, create the ClusterRole and ServiceAccount, then bind them using: `kubectl create rolebinding cicd-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1`.
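A sketch of the ClusterRole referenced above, assuming the task only requires the `create` verb on these workload resources:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]          # deployments, statefulsets, daemonsets live in 'apps'
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
```

The ServiceAccount can be created with `kubectl create serviceaccount cicd-token -n app-team1` before running the rolebinding command from the text.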
To upgrade the Kubernetes control-plane and node components on the master node, first ensure you are using the correct context with `kubectl config use-context`. Cordon and drain the master node with `kubectl cordon mk8s-master-1` and `kubectl drain mk8s-master-1 --delete-local-data --ignore-daemonsets --force`. SSH into the master node and perform the upgrade by installing the target `kubeadm` version and applying the upgrade with `kubeadm upgrade apply <version>`. Then upgrade `kubelet` and `kubectl` using `apt install`. After the upgrade, restart the `kubelet` service, uncordon the master node with `kubectl uncordon mk8s-master-1`, and verify the upgrade with `kubectl get node`.
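The on-node portion of the procedure can be sketched as the following command sequence, to be run on the master over SSH; `<version>` stands for the target release (e.g. an apt version string and a `vX.Y.Z` tag), which depends on the exam scenario:

```shell
# On the control-plane node, after cordoning and draining it:
apt update
apt install -y kubeadm=<version>        # install the target kubeadm first
kubeadm upgrade plan                    # review what the upgrade will do
kubeadm upgrade apply <version>         # upgrade the control-plane components
apt install -y kubelet=<version> kubectl=<version>
systemctl daemon-reload
systemctl restart kubelet               # pick up the new kubelet binary
```

After this, uncordon the node from a workstation with `kubectl uncordon mk8s-master-1` and confirm the new version with `kubectl get node`.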
First, edit the existing Deployment to specify the container port under `ports` in the container spec. For example, to expose port 80 for a Deployment running an nginx container, add `ports: [{containerPort: 80, name: http, protocol: TCP}]`. Then expose the Deployment with the `kubectl expose` command, specifying the service type as NodePort: `kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=80 --type=NodePort`. This routes traffic arriving on the allocated node port of each node to port 80 of the Pods.
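The container-spec edit described above might look like this excerpt (the container name and image are assumptions based on the nginx example):

```yaml
# Excerpt of the 'front-end' Deployment's Pod template:
spec:
  containers:
  - name: nginx            # assumed container name
    image: nginx           # assumed image
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
```

With the port declared, the `kubectl expose` command from the text creates the `front-end-svc` NodePort Service targeting it.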
Draining a Kubernetes node before maintenance is crucial for safely migrating workloads and preventing disruption. It ensures that all managed Pods are evicted and rescheduled onto other nodes, preserving application availability. Use `kubectl drain <node-name> --delete-local-data --ignore-daemonsets --force`, which safely evicts Pods (DaemonSet-managed Pods are ignored) so maintenance can proceed without impacting service continuity.
Node selectors in Kubernetes restrict Pod scheduling to nodes with specific labels, allowing targeted resource allocation. By setting a node selector such as `nodeSelector: {disk: spinning}`, the scheduler only places the Pod on nodes labeled `disk=spinning`. Challenges include decreased scheduling flexibility, potential resource bottlenecks, and added complexity in managing node labels, which can lead to underutilized nodes or interfere with autoscaling.
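A minimal Pod manifest illustrating the selector (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: disk-bound-pod      # hypothetical name
spec:
  nodeSelector:
    disk: spinning          # scheduled only onto nodes labeled disk=spinning
  containers:
  - name: app
    image: nginx            # illustrative image
```

If no node carries the `disk=spinning` label (applied with `kubectl label node <node-name> disk=spinning`), the Pod stays Pending, which is one of the bottleneck risks mentioned above.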
For shared read access, create a PersistentVolume (PV) whose access modes include `ReadOnlyMany`, specifying a `hostPath` if on-node storage is appropriate. For example, a PV can be created with `hostPath: /srv/app-config` and a capacity of `1Gi`. Next, create a PersistentVolumeClaim (PVC) whose storage class and access modes match the PV's. Validate the setup by checking that the PVC is bound to the PV with `kubectl get pvc`. This arrangement suits applications that need read-only shared filesystem access among multiple Pods.
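A sketch of the PV/PVC pair under these assumptions (both object names are hypothetical; no storage class is set, so the PVC binds by matching capacity and access mode):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config              # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /srv/app-config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-config-claim        # hypothetical name
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
```

Once applied, `kubectl get pvc app-config-claim` should report a `Bound` status.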
To bring a 'NotReady' node back to 'Ready', first diagnose the issue by accessing the node and inspecting the status of its services, especially the kubelet, with `systemctl status kubelet`. If the kubelet is not active, start it with `systemctl start kubelet`. To keep the node 'Ready' across reboots, enable the kubelet service at boot with `systemctl enable kubelet`, so the node always starts the services required for a 'Ready' state automatically.
Using a sidecar container for logging allows centralized management of an application Pod's log streams and integrates with Kubernetes' logging architecture. Sidecars separate application logic from logging, enabling independent scaling and updates. To implement this, add a busybox sidecar to the Pod spec running a command like `/bin/sh -c 'tail -n+1 -f /var/log/big-corp-app.log'`, and share the log files via a volume mount. This configuration simplifies log aggregation but requires monitoring resource consumption and ensuring fault isolation.
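A sketch of the Pod spec with the sidecar attached; the Pod name, main-container image, and volume name are assumptions, while the log path and tail command come from the text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app            # hypothetical Pod name
spec:
  containers:
  - name: app
    image: big-corp-app-image   # illustrative; the application writes the log file
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: sidecar
    image: busybox
    command: ["/bin/sh", "-c", "tail -n+1 -f /var/log/big-corp-app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log       # same volume, so the sidecar sees the log file
  volumes:
  - name: logs
    emptyDir: {}
```

Because the sidecar streams the file to its stdout, the log becomes visible through `kubectl logs big-corp-app -c sidecar` and to any cluster-level log collector.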