Extending Kubernetes
Different ways to change the behavior of your Kubernetes cluster.
Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or
submit patches to the Kubernetes project code.
This guide describes the options for customizing a Kubernetes cluster. It is aimed at
cluster operators who want to understand how to adapt their Kubernetes cluster to the needs
of their work environment. Developers who are prospective Platform Developers or
Kubernetes Project Contributors will also find it useful as an introduction to what extension
points and patterns exist, and their trade-offs and limitations.
Customization approaches can be broadly divided into configuration, which only involves
changing command line arguments, local configuration files, or API resources; and extensions,
which involve running additional programs, additional network services, or both. This
document is primarily about extensions.
Configuration
Configuration files and command arguments are documented in the Reference section of the
online documentation, with a page for each binary:
kube-apiserver
kube-controller-manager
kube-scheduler
kubelet
kube-proxy
Command arguments and configuration files may not always be changeable in a hosted
Kubernetes service or a distribution with managed installation. When they are changeable,
they are usually only changeable by the cluster operator. Also, they are subject to change in
future Kubernetes versions, and setting them may require restarting processes. For those
reasons, they should be used only when there are no other options.
Built-in policy APIs, such as ResourceQuota, NetworkPolicy and Role-based Access Control
(RBAC), provide declaratively configured policy settings. Policy APIs are typically usable even
with hosted Kubernetes services and with managed Kubernetes installations. The built-in
policy APIs follow the same conventions as other Kubernetes resources such as Pods. When
you use a policy API that is stable, you benefit from a defined support policy, like other
Kubernetes APIs. For these reasons, policy APIs are recommended over configuration files
and command arguments where suitable.
Extensions
Extensions are software components that extend and deeply integrate with Kubernetes. They
adapt it to support new types and new kinds of hardware.
Extension patterns
Kubernetes is designed to be automated by writing client programs. Any program that reads
and/or writes to the Kubernetes API can provide useful automation. Automation can run on
the cluster or off it. By following the guidance in this doc you can write highly available and
robust automation. Automation generally works with any Kubernetes cluster, including
hosted clusters and managed installations.
There is a specific pattern for writing client programs that work well with Kubernetes called
the controller pattern. Controllers typically read an object's .spec , possibly do things, and
then update the object's .status .
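For example, a controller for a hypothetical replicated service might reconcile an object like
the one below (the kind and field names here are illustrative, not a real Kubernetes API): the
controller compares .spec (the desired state) with the world, acts to converge them, and
records what it observed in .status.

apiVersion: example.com/v1
kind: ReplicatedService        # hypothetical custom kind
metadata:
  name: my-service
spec:
  replicas: 3        # desired state: what the controller works to achieve
status:
  readyReplicas: 2   # observed state: what the controller last saw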
A controller is a client of the Kubernetes API. When Kubernetes is the client and calls out to a
remote service, Kubernetes calls this a webhook. The remote service is called a webhook
backend. As with custom controllers, webhooks do add a point of failure.
Note: Outside of Kubernetes, the term “webhook” typically refers to a mechanism for
asynchronous notifications, where the webhook call serves as a one-way notification to
another system or component. In the Kubernetes ecosystem, even synchronous HTTP
callouts are often described as “webhooks”.
In the webhook model, Kubernetes makes a network request to a remote service. With the
alternative binary Plugin model, Kubernetes executes a binary (program). Binary plugins are
used by the kubelet (for example, CSI storage plugins and CNI network plugins), and by
kubectl (see Extend kubectl with plugins).
Extension points
The following extension points exist in a Kubernetes cluster, together with the clients that
access them:

1. The API server handles all requests. Several types of extension points in the API server
   allow authenticating requests, or blocking them based on their content, editing content,
   and handling deletion. These are described in the API Access Extensions section.
2. The API server serves various kinds of resources. Built-in resource kinds, such as pods ,
   are defined by the Kubernetes project and can't be changed. Read API extensions to
   learn about extending the Kubernetes API.
3. The Kubernetes scheduler decides which nodes to place pods on. There are several ways
   to extend scheduling, which are described in the Scheduling extensions section.
4. The kubelet runs on servers (nodes), and helps pods appear like virtual servers with
   their own IPs on the cluster network. Network Plugins allow for different
   implementations of pod networking.
5. You can use Device Plugins to integrate custom hardware or other special node-local
   facilities, and make these available to Pods running in your cluster. The kubelet includes
   support for working with device plugins.

The kubelet also mounts and unmounts volumes for pods and their containers. You can
use Storage Plugins to add support for new kinds of storage and other volume types.
Client extensions
Plugins for kubectl are separate binaries that add or replace the behavior of specific
subcommands. The kubectl tool can also integrate with credential plugins. These extensions
only affect an individual user's local environment, and so cannot enforce site-wide policies.
If you want to extend the kubectl tool, read Extend kubectl with plugins.
API extensions
Custom resource definitions
Consider adding a Custom Resource to Kubernetes if you want to define new controllers,
application configuration objects or other declarative APIs, and to manage them using
Kubernetes tools, such as kubectl .
For more about Custom Resources, see the Custom Resources concept guide.
You can also make your own custom APIs and control loops that manage other resources,
such as storage, or to define policies (such as an access control restriction).
API access extensions
Each of the steps in the Kubernetes authentication / authorization flow offers extension
points.
Authentication
Authentication maps headers or certificates in all requests to a username for the client
making the request.
Kubernetes supports several built-in authentication methods. It can also sit behind an
authenticating proxy, and, if those methods don't meet your needs, it can send a token from
an Authorization: header to a remote service for verification (an authentication webhook).
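The remote service is described to the kube-apiserver in a kubeconfig-format file passed via
the --authentication-token-webhook-config-file flag. A minimal sketch, assuming a
hypothetical service at authn.example.com and placeholder certificate paths:

# clusters refers to the remote authentication service
apiVersion: v1
kind: Config
clusters:
  - name: remote-authn-service
    cluster:
      certificate-authority: /path/to/ca.pem   # CA to validate the remote service
      server: https://authn.example.com/authenticate
# users refers to the kube-apiserver's client credentials for the webhook
users:
  - name: kube-apiserver
    user:
      client-certificate: /path/to/cert.pem
      client-key: /path/to/key.pem
contexts:
  - name: webhook
    context:
      cluster: remote-authn-service
      user: kube-apiserver
current-context: webhook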
Authorization
Authorization determines whether specific users can read, write, and do other operations on
API resources. It works at the level of whole resources -- it doesn't discriminate based on
arbitrary object fields.
If the built-in authorization options don't meet your needs, an authorization webhook allows
calling out to custom code that makes an authorization decision.
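The webhook receives a serialized SubjectAccessReview object describing the request and
replies with a decision in the status field. A sketch of what the webhook backend sees (the
user, group, and namespace values are illustrative):

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane                # the authenticated user making the request
  groups:
    - developers
  resourceAttributes:
    verb: get
    group: ""               # "" is the core API group
    resource: pods
    namespace: dev
# The backend replies by setting, for example:
#   status:
#     allowed: true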
Admission control
The Image Policy webhook restricts what images can be run in containers.
To make arbitrary admission control decisions, a general Admission webhook can be
used. Admission webhooks can reject the creation or update of objects. Some admission
webhooks modify the incoming request data before it is handled further by Kubernetes.
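A validating admission webhook is registered with a ValidatingWebhookConfiguration object.
A minimal sketch, assuming a hypothetical webhook served by a Service named
pod-policy-webhook in the webhook-system namespace:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com
webhooks:
  - name: pod-policy.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail          # reject requests if the webhook is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: webhook-system
        name: pod-policy-webhook
        path: /validate
      caBundle: <base64-encoded CA certificate>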
Infrastructure extensions
Device plugins
Device plugins allow a node to discover new Node resources (in addition to the built-in ones
like cpu and memory) and advertise them to the kubelet.
Storage plugins
Container Storage Interface (CSI) plugins provide a way to extend Kubernetes with support
for new kinds of volumes. The volumes can be backed by durable external storage, or provide
ephemeral storage, or they might offer a read-only interface to information using a filesystem
paradigm.
Kubernetes also includes support for FlexVolume plugins, which are deprecated since
Kubernetes v1.23 (in favour of CSI).
FlexVolume plugins allow users to mount volume types that aren't natively supported by
Kubernetes. When you run a Pod that relies on FlexVolume storage, the kubelet calls a binary
plugin to mount the volume. The archived FlexVolume design proposal has more detail on this
approach.
The Kubernetes Volume Plugin FAQ for Storage Vendors includes general information on
storage plugins.
Network plugins
Your Kubernetes cluster needs a network plugin in order to have a working Pod network and
to support other aspects of the Kubernetes network model.
Network Plugins allow Kubernetes to work with different networking topologies and
technologies.
Kubelet image credential providers
Kubelet image credential providers are plugins for the kubelet to dynamically retrieve image
registry credentials. The credentials are then used when pulling images from container image
registries that match the configuration.
The plugins can communicate with external services or use local files to obtain credentials.
This way, the kubelet does not need to have static credentials for each registry, and can
support various authentication methods and protocols.
For plugin configuration details, see Configure a kubelet image credential provider.
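Configuration is supplied to the kubelet as a CredentialProviderConfig file. A sketch, assuming
a hypothetical provider binary named example-credential-provider that handles a private
registry:

apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  # the binary must exist in the directory given by the kubelet's
  # --image-credential-provider-bin-dir flag
  - name: example-credential-provider
    matchImages:
      - "*.registry.example.com"    # images this provider can authenticate for
    defaultCacheDuration: "12h"     # how long the kubelet caches returned credentials
    apiVersion: credentialprovider.kubelet.k8s.io/v1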
Scheduling extensions
The scheduler is a special type of controller that watches pods, and assigns pods to nodes.
The default scheduler can be replaced entirely, while continuing to use other Kubernetes
components, or multiple schedulers can run at the same time.
This is a significant undertaking, and almost all Kubernetes users find they do not need to
modify the scheduler.
You can control which scheduling plugins are active, or associate sets of plugins with different
named scheduler profiles. You can also write your own plugin that integrates with one or
more of the kube-scheduler's extension points.
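Plugins are enabled and disabled per profile in the KubeSchedulerConfiguration. A sketch
with two profiles, one default and one (hypothetically named no-scoring-scheduler) that
disables all scoring plugins:

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: no-scoring-scheduler   # Pods select this via .spec.schedulerName
    plugins:
      score:
        disabled:
          - name: '*'                     # turn off every scoring plugin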
Finally, the built-in kube-scheduler component supports a webhook that permits a remote
HTTP backend (scheduler extender) to filter and / or prioritize the nodes that the kube-
scheduler chooses for a pod.
Note: You can only affect node filtering and node prioritization with a scheduler extender
webhook; other extension points are not available through the webhook integration.
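A scheduler extender is declared in the same KubeSchedulerConfiguration. A sketch,
assuming a hypothetical backend at http://extender.example.com/ that implements filter and
prioritize endpoints:

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
extenders:
  - urlPrefix: http://extender.example.com/
    filterVerb: filter          # POSTed to urlPrefix + filterVerb
    prioritizeVerb: prioritize  # POSTed to urlPrefix + prioritizeVerb
    weight: 1                   # multiplier applied to the extender's node scores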
What's next
Learn more about infrastructure extensions
Device Plugins
Network Plugins
CSI storage plugins
Learn about kubectl plugins
Learn more about Custom Resources
Learn more about Extension API Servers
Learn about Dynamic admission control
Learn about the Operator pattern
Network plugins
A network plugin allows Kubernetes to work with different networking topologies and
technologies. Your Kubernetes cluster needs a network plugin in order to have a working
Pod network and to support other aspects of the Kubernetes network model.
You must use a CNI plugin that is compatible with the v0.4.0 or later releases of the CNI
specification. The Kubernetes project recommends using a plugin that is compatible with the
v1.0.0 CNI specification (plugins can be compatible with multiple spec versions).
Installation
A Container Runtime, in the networking context, is a daemon on a node configured to provide
CRI services for the kubelet. In particular, the Container Runtime must be configured to load
the CNI plugins required to implement the Kubernetes network model.
Note:
Prior to Kubernetes 1.24, the CNI plugins could also be managed by the kubelet using the
cni-bin-dir and network-plugin command-line parameters. These command-line
parameters were removed in Kubernetes 1.24, with management of the CNI no longer in
scope for kubelet.
See Troubleshooting CNI plugin-related errors if you are facing issues following the
removal of dockershim.
For specific information about how a Container Runtime manages the CNI plugins, see the
documentation for that Container Runtime, for example:
containerd
CRI-O
For specific information about how to install and manage a CNI plugin, see the documentation
for that plugin or networking provider.
By default, if no kubelet network plugin is specified, the noop plugin is used, which sets
net/bridge/bridge-nf-call-iptables=1 to ensure simple configurations (like Docker with a
bridge) work correctly with the iptables proxy.
Loopback CNI
In addition to the CNI plugin installed on the nodes for implementing the Kubernetes network
model, Kubernetes also requires the container runtimes to provide a loopback interface lo ,
which is used for each sandbox (pod sandboxes, vm sandboxes, ...). Implementing the
loopback interface can be accomplished by re-using the CNI loopback plugin, or by developing
your own code to achieve this (see this example from CRI-O).
Support hostPort
The CNI networking plugin supports hostPort . You can use the official portmap plugin
offered by the CNI plugin team or use your own plugin with portMapping functionality.
If you want to enable hostPort support, you must specify portMappings capability in your
cni-conf-dir . For example:
{
  "name": "k8s-pod-network",
  "cniVersion": "0.4.0",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "127.0.0.1",
      "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "externalSetMarkChain": "KUBE-MARK-MASQ"
    }
  ]
}
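With the portmap plugin in place, a Pod can then request a hostPort. A minimal sketch (the
name, image, and port numbers are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostport-example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80   # port inside the Pod
          hostPort: 8080      # port exposed on the node, wired up by portmap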
Support traffic shaping
The CNI networking plugin also supports pod ingress and egress traffic shaping. You can use
the official bandwidth plugin offered by the CNI plugin team or use your own plugin with
bandwidth control functionality.
If you want to enable traffic shaping support, you must add the bandwidth plugin to your CNI
configuration file (default /etc/cni/net.d ) and ensure that the binary is included in your CNI
bin dir (default /opt/cni/bin ).
{
  "name": "k8s-pod-network",
  "cniVersion": "0.4.0",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "127.0.0.1",
      "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    }
  ]
}
Now you can add the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth
annotations to your Pod. For example:

apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-example   # illustrative name; the annotations are what matter
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:2.0
Device Plugins
Kubernetes provides a device plugin framework that you can use to advertise system
hardware resources to the kubelet.
Instead of customizing the code for Kubernetes itself, vendors can implement a device plugin
that you deploy either manually or as a DaemonSet. The targeted devices include GPUs, high-
performance NICs, FPGAs, InfiniBand adapters, and other similar computing resources that
may require vendor specific initialization and setup.
service Registration {
    rpc Register(RegisterRequest) returns (Empty) {}
}
A device plugin can register itself with the kubelet through this gRPC service. During the
registration, the device plugin needs to send:

The name of its Unix socket.
The Device Plugin API version against which it was built.
The ResourceName it wants to advertise, following the extended resource naming scheme
as vendor-domain/resourcetype (for example, hardware-vendor.example/foo).
Following a successful registration, the device plugin sends the kubelet the list of devices it
manages, and the kubelet is then in charge of advertising those resources to the API server as
part of the kubelet node status update. For example, after a device plugin registers hardware-
vendor.example/foo with the kubelet and reports two healthy devices on a node, the node
status is updated to advertise that the node has 2 "Foo" devices installed and available.
Then, users can request devices as part of a Pod specification (see container ). Requesting
extended resources is similar to how you manage requests and limits for other resources,
with the following differences:

Extended resources are only supported as integer resources and cannot be overcommitted.
Devices cannot be shared between containers.
Example
Suppose a Kubernetes cluster is running a device plugin that advertises resource hardware-
vendor.example/foo on certain nodes. Here is an example of a pod requesting this resource
to run a demo workload:
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo-container-1
      image: registry.k8s.io/pause:2.0
      resources:
        limits:
          hardware-vendor.example/foo: 2
#
# This Pod needs 2 of the hardware-vendor.example/foo devices
# and can only schedule onto a Node that's able to satisfy
# that need.
#
# If the Node has more than 2 of those devices available, the
# remainder would be available for other Pods to use.
Device plugin implementation
The general workflow of a device plugin includes the following steps:

1. Initialization. During this phase, the device plugin performs vendor-specific initialization
and setup to make sure the devices are in a ready state.
2. The plugin starts a gRPC service, with a Unix socket under the host path
/var/lib/kubelet/device-plugins/ , that implements the following interfaces:
service DevicePlugin {
    // GetDevicePluginOptions returns options to be communicated with the Device Manager.
    rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}
    // ListAndWatch streams the device list, resending it on any device state change.
    rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}
    // Allocate is called during container creation so the plugin can prepare
    // devices and tell the kubelet how to expose them in the container.
    rpc Allocate(AllocateRequest) returns (AllocateResponse) {}
}
3. The plugin registers itself with the kubelet through the Unix socket at host path
/var/lib/kubelet/device-plugins/kubelet.sock .
Note: The ordering of the workflow is important. A plugin MUST start serving gRPC
service before registering itself with kubelet for successful registration.
4. After successfully registering itself, the device plugin runs in serving mode, during which
it keeps monitoring device health and reports back to the kubelet upon any device state
changes. It is also responsible for serving Allocate gRPC requests. During Allocate ,
the device plugin may do device-specific preparation; for example, GPU cleanup or
QRNG initialization. If the operations succeed, the device plugin returns an
AllocateResponse that contains container runtime configurations for accessing the
allocated devices. The kubelet passes this information to the container runtime.
If you choose the DaemonSet approach you can rely on Kubernetes to place the device
plugin's Pod onto Nodes, restart the daemon Pod after failure, and help automate
upgrades.
API compatibility
Previously, the versioning scheme required the device plugin's API version to match exactly
the kubelet's version. Since the graduation of this feature to Beta in v1.12, this is no longer a
hard requirement. The API is versioned and has been stable since the Beta graduation of this
feature. Because of this, kubelet upgrades should be seamless, but there may still be changes
in the API before stabilization, so upgrades are not guaranteed to be non-breaking.
To run device plugins on nodes that need to be upgraded to a Kubernetes release with a
newer device plugin API version, upgrade your device plugins to support both versions before
upgrading these nodes. Taking that approach will ensure the continuous functioning of the
device allocations during the upgrade.
Monitoring device plugin resources
In order to monitor resources provided by device plugins, monitoring agents need to be able
to discover the set of devices that are in-use on the node and obtain metadata to describe
which container the metric should be associated with. Prometheus metrics exposed by device
monitoring agents should follow the Kubernetes Instrumentation Guidelines, identifying
containers using pod , namespace , and container prometheus labels.
The kubelet provides a gRPC service to enable discovery of in-use devices, and to provide
metadata for these devices:
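This is the PodResourcesLister service; a sketch of its shape, following the kubelet
podresources v1 API definitions:

service PodResourcesLister {
    // List returns the devices and resources in use by each pod on the node.
    rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
    // GetAllocatableResources returns the resources the node can allocate overall.
    rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}
    // Get returns the resources in use by a single named pod.
    rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}
}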
Starting from Kubernetes v1.27, the List endpoint can provide information on resources of
running pods allocated in ResourceClaims by the DynamicResourceAllocation API. To enable
this feature, the kubelet must be started with the following flags:

--feature-gates=DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true
Note:
GetAllocatableResources should only be used to evaluate allocatable resources on a
node. If the goal is to evaluate free/unallocated resources, it should be used in
conjunction with the List() endpoint. The result obtained by GetAllocatableResources
remains the same unless the underlying resources exposed to the kubelet change. This
happens rarely, but when it does (for example: hotplug/hotunplug, device health
changes), the client is expected to call the GetAllocatableResources endpoint again.
Starting from Kubernetes v1.23, GetAllocatableResources is enabled by default. You can
disable it by turning off the KubeletPodResourcesGetAllocatable feature gate.
Before Kubernetes v1.23, to enable this feature the kubelet must be started with the
following flag:
--feature-gates=KubeletPodResourcesGetAllocatable=true
ContainerDevices expose the topology information, declaring to which NUMA cells the
device is affine. The NUMA cells are identified using an opaque integer ID, whose value is
consistent with what device plugins report when they register themselves to the kubelet.
To enable this feature, you must start your kubelet services with the following flag:
--feature-gates=KubeletPodResourcesGet=true
The Get endpoint can provide Pod information related to dynamic resources allocated by the
dynamic resource allocation API. To enable this feature, you must ensure your kubelet
services are started with the following flags:
--feature-gates=KubeletPodResourcesGet=true,DynamicResourceAllocation=true,KubeletPodResourcesDynamicResources=true
Device plugin integration with the Topology Manager
The Device Plugin API includes a TopologyInfo struct so that plugins can report the NUMA
topology of their devices:

message TopologyInfo {
    repeated NUMANode nodes = 1;
}

message NUMANode {
    int64 ID = 1;
}
Device Plugins that wish to leverage the Topology Manager can send back a populated
TopologyInfo struct as part of the device registration, along with the device IDs and the health
of the device. The device manager will then use this information to consult with the Topology
Manager and make resource assignment decisions.
TopologyInfo supports setting a nodes field to either nil or a list of NUMA nodes. This
allows the Device Plugin to advertise a device that spans multiple NUMA nodes.

Setting TopologyInfo to nil or providing an empty list of NUMA nodes for a given device
indicates that the Device Plugin does not have a NUMA affinity preference for that device.
What's next
Learn about scheduling GPU resources using device plugins
Learn about advertising extended resources on a node
Learn about the Topology Manager
Read about using hardware acceleration for TLS ingress with Kubernetes
Custom resources
A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a
certain kind; for example, the built-in pods resource contains a collection of Pod objects.
A custom resource is an extension of the Kubernetes API that is not necessarily available in a
default Kubernetes installation. It represents a customization of a particular Kubernetes
installation. However, many core Kubernetes functions are now built using custom resources,
making Kubernetes more modular.
Custom resources can appear and disappear in a running cluster through dynamic
registration, and cluster admins can update custom resources independently of the cluster
itself. Once a custom resource is installed, users can create and access its objects using
kubectl, just as they do for built-in resources like Pods.
Custom controllers
On their own, custom resources let you store and retrieve structured data. When you
combine a custom resource with a custom controller, custom resources provide a true
declarative API.
The Kubernetes declarative API enforces a separation of responsibilities. You declare the
desired state of your resource. The Kubernetes controller keeps the current state of
Kubernetes objects in sync with your declared desired state. This is in contrast to an
imperative API, where you instruct a server what to do.
You can deploy and update a custom controller on a running cluster, independently of the
cluster's lifecycle. Custom controllers can work with any kind of resource, but they are
especially effective when combined with custom resources. The Operator pattern combines
custom resources and custom controllers. You can use custom controllers to encode domain
knowledge for specific applications into an extension of the Kubernetes API.
Should I add a custom resource to my Kubernetes cluster?
When creating a new API, consider whether to aggregate your API with the Kubernetes
cluster APIs or let your API stand alone.

| Consider API aggregation if: | Prefer a standalone API if: |
| --- | --- |
| You want your new types to be readable and writable using kubectl. | kubectl support is not required. |
| You want to view your new types in a Kubernetes UI, such as dashboard, alongside built-in types. | Kubernetes UI support is not required. |
| You are developing a new API. | You already have a program that serves your API and works well. |
| You are willing to accept the format restriction that Kubernetes puts on REST resource paths, such as API Groups and Namespaces. (See the API Overview.) | You need to have specific REST paths to be compatible with an already defined REST API. |
| You want to reuse Kubernetes API support features. | You don't need those features. |
Declarative APIs
In a Declarative API, typically:
Your API consists of a relatively small number of relatively small objects (resources).
The objects define configuration of applications or infrastructure.
The objects are updated relatively infrequently.
Humans often need to read and write the objects.
The main operations on the objects are CRUD-y (creating, reading, updating and
deleting).
Transactions across objects are not required: the API represents a desired state, not an
exact state.
Imperative APIs are not declarative. Signs that your API might not be declarative include:
The client says "do this", and then gets a synchronous response back when it is done.
The client says "do this", and then gets an operation ID back, and has to check a
separate Operation object to determine completion of the request.
You talk about Remote Procedure Calls (RPCs).
Directly storing large amounts of data; for example, > a few kB per object, or > 1000s of
objects.
High bandwidth access (10s of requests per second sustained) needed.
Store end-user data (such as images, PII, etc.) or other large-scale data processed by
applications.
The natural operations on the objects are not CRUD-y.
The API is not easily modeled as objects.
You chose to represent pending operations with an operation ID or an operation object.

Should I use a ConfigMap or a custom resource?
Use a ConfigMap if any of the following apply:

You want to put the entire configuration into one key of a ConfigMap.
The main use of the configuration file is for a program running in a Pod on your cluster
to consume the file to configure itself.
Consumers of the file prefer to consume via file in a Pod or environment variable in a
pod, rather than the Kubernetes API.
You want to perform rolling updates via Deployment, etc., when the file is updated.
Note: Use a Secret for sensitive data, which is similar to a ConfigMap but more secure.
Use a custom resource (CRD or Aggregated API) if most of the following apply:
You want to use Kubernetes client libraries and CLIs to create and update the new
resource.
You want top-level support from kubectl ; for example, kubectl get my-object
object-name .
You want to build new automation that watches for updates on the new object, and then
CRUD other objects, or vice versa.
You want to write automation that handles updates to the object.
You want to use Kubernetes API conventions like .spec , .status , and .metadata .
You want the object to be an abstraction over a collection of controlled resources, or a
summarization of other resources.
Adding custom resources
Kubernetes provides two options for adding custom resources to your cluster, so that neither
ease of use nor flexibility is compromised:

Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts
as a proxy. This arrangement is called API Aggregation (AA). To users, the Kubernetes API
appears extended.

CRDs allow users to create new types of resources without adding another API server. You do
not need to understand API Aggregation to use CRDs.

Regardless of how they are installed, the new resources are referred to as Custom Resources
to distinguish them from built-in Kubernetes resources (like pods).
Note:
Avoid using a Custom Resource as data storage for application, end user, or monitoring
data: architecture designs that store application data within the Kubernetes API typically
represent a design that is too closely coupled.
CustomResourceDefinitions
The CustomResourceDefinition API resource allows you to define custom resources. Defining
a CRD object creates a new custom resource with a name and schema that you specify. The
Kubernetes API serves and handles the storage of your custom resource. The name of a CRD
object must be a valid DNS subdomain name.
This frees you from writing your own API server to handle the custom resource, but the
generic nature of the implementation means you have less flexibility than with API server
aggregation.
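A sketch of a CRD that defines a hypothetical CronTab resource (the group, names, and
schema here are illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true             # this version is enabled
      storage: true            # exactly one version is the storage version
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer

Once such a CRD is applied, kubectl get crontabs works like it does for any built-in resource.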
Refer to the custom controller example for an example of how to register a new custom
resource, work with instances of your new resource type, and use a controller to handle
events.
API server aggregation
The aggregation layer allows you to provide specialized implementations for your custom
resources by writing and deploying your own API server. The main API server delegates
requests to your API server for the custom resources that you handle, making them available
to all of its clients.
Comparing ease of use
CRDs are easier to create than Aggregated APIs.

| CRDs | Aggregated API |
| --- | --- |
| No additional service to run; CRDs are handled by the API server. | An additional service to create, and that could fail. |
| No ongoing support once the CRD is created. Any bug fixes are picked up as part of normal Kubernetes Master upgrades. | May need to periodically pick up bug fixes from upstream and rebuild and update the Aggregated API server. |
| No need to handle multiple versions of your API; for example, when you control the client for this resource, you can upgrade it in sync with the API. | You need to handle multiple versions of your API; for example, when developing an extension to share with the world. |
| Feature | Description | CRDs | Aggregated API |
| --- | --- | --- | --- |
| Validation | Help users prevent errors and allow you to evolve your API independently of your clients. These features are most useful when there are many clients who can't all update at the same time. | Yes. Most validation can be specified in the CRD using OpenAPI v3.0 validation. Any other validations supported by addition of a Validating Webhook. | Yes, arbitrary validation checks. |
Common Features
When you create a custom resource, either via a CRD or an AA, you get many features for
your API, compared to implementing it outside the Kubernetes platform:
| Feature | What it does |
| --- | --- |
| CRUD | The new endpoints support CRUD basic operations via HTTP and kubectl. |
| Watch | The new endpoints support Kubernetes Watch operations via HTTP. |
| Built-in Authentication | Access to the extension uses the core API server (aggregation layer) for authentication. |
| Built-in Authorization | Access to the extension can reuse the authorization used by the core API server; for example, RBAC. |
| Admission Webhooks | Set default values and validate extension resources during any create/update/delete operation. |
| Unset versus Empty | Clients can distinguish unset fields from zero-valued fields. |
| Labels and annotations | Common metadata across objects that tools know how to edit for core and custom resources. |
Storage
Custom resources consume storage space in the same way that ConfigMaps do. Creating too
many custom resources may overload your API server's storage space.
Aggregated API servers may use the same storage as the main API server, in which case the
same warning applies.
If you use RBAC for authorization, most RBAC roles will not grant access to the new resources
(except the cluster-admin role or any role created with wildcard rules). You'll need to explicitly
grant access to the new resources. CRDs and Aggregated APIs often come bundled with new
role definitions for the types they add.
Aggregated API servers may or may not use the same authentication, authorization, and
auditing as the primary API server.
What's next
Learn how to Extend the Kubernetes API with the aggregation layer.
Learn how to Extend the Kubernetes API with CustomResourceDefinition.
Kubernetes API Aggregation Layer
The aggregation layer is different from Custom Resources, which are a way to make the
kube-apiserver recognise new kinds of object.
Aggregation layer
The aggregation layer runs in-process with the kube-apiserver. Until an extension resource is
registered, the aggregation layer will do nothing. To register an API, you add an APIService
object, which "claims" the URL path in the Kubernetes API. At that point, the aggregation layer
will proxy anything sent to that API path (e.g. /apis/myextension.mycompany.io/v1/… ) to the
registered APIService.
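A sketch of such an APIService registration, assuming a hypothetical extension API server
running behind a Service named my-extension-apiserver:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.myextension.mycompany.io   # conventionally <version>.<group>
spec:
  group: myextension.mycompany.io
  version: v1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:                            # the in-cluster Service fronting the extension API server
    namespace: my-extension
    name: my-extension-apiserver
    port: 443
  caBundle: <base64-encoded CA certificate>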
The most common way to implement the APIService is to run an extension API server in Pod(s)
that run in your cluster. If you're using the extension API server to manage resources in your
cluster, the extension API server (also written as "extension-apiserver") is typically paired with
one or more controllers. The apiserver-builder library provides a skeleton for both extension
API servers and the associated controller(s).
Response latency
Extension API servers should have low latency networking to and from the kube-apiserver.
Discovery requests are required to round-trip from the kube-apiserver in five seconds or less.
If your extension API server cannot achieve that latency requirement, consider making
changes that let you meet it.
What's next
To get the aggregator working in your environment, configure the aggregation layer.
Then, set up an extension api-server to work with the aggregation layer.
Read about APIService in the API reference
Alternatively: learn how to extend the Kubernetes API using Custom Resource Definitions.
Operator pattern
Operators are software extensions to Kubernetes that make use of custom resources to
manage applications and their components. Operators follow Kubernetes principles, notably
the control loop.
Motivation
The operator pattern aims to capture the key aim of a human operator who is managing a
service or set of services. Human operators who look after specific applications and services
have deep knowledge of how the system ought to behave, how to deploy it, and how to react
if there are problems.
People who run workloads on Kubernetes often like to use automation to take care of
repeatable tasks. The operator pattern captures how you can write code to automate a task
beyond what Kubernetes itself provides.
Operators in Kubernetes
Kubernetes is designed for automation. Out of the box, you get lots of built-in automation
from the core of Kubernetes. You can use Kubernetes to automate deploying and running
workloads, and you can automate how Kubernetes does that.
Kubernetes' operator pattern concept lets you extend the cluster's behaviour without
modifying the code of Kubernetes itself by linking controllers to one or more custom
resources. Operators are clients of the Kubernetes API that act as controllers for a Custom
Resource.
An example operator
What might an operator look like in more detail? Here's an example, piece by piece (a sketch
of the corresponding custom resource appears after this list):
1. A custom resource named SampleDB, that you can configure into the cluster.
2. A Deployment that makes sure a Pod is running that contains the controller part of the
operator.
3. A container image of the operator code.
4. Controller code that queries the control plane to find out what SampleDB resources are
configured.
5. The core of the operator is code to tell the API server how to make reality match the
configured resources.
If you add a new SampleDB, the operator sets up PersistentVolumeClaims to
provide durable database storage, a StatefulSet to run SampleDB and a Job to
handle initial configuration.
If you delete it, the operator takes a snapshot, then makes sure that the StatefulSet
and Volumes are also removed.
6. The operator also manages regular database backups. For each SampleDB resource, the
operator determines when to create a Pod that can connect to the database and take
backups. These Pods would rely on a ConfigMap and / or a Secret that has database
connection details and credentials.
7. Because the operator aims to provide robust automation for the resource it manages,
there would be additional supporting code. For this example, code checks to see if the
database is running an old version and, if so, creates Job objects that upgrade it for you.
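Tying the example together, a user would interact with the operator by creating an object like
the following (a hypothetical sketch; the API group, kind, and fields are illustrative, not a real
API):

apiVersion: example.com/v1
kind: SampleDB
metadata:
  name: example-database
spec:
  version: "14.1"               # database version the operator should run
  storageGB: 20                 # size of the PersistentVolumeClaim the operator creates
  backupSchedule: "0 2 * * *"   # cron schedule for the backup Pods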
Deploying operators
The most common way to deploy an operator is to add the Custom Resource Definition and
its associated Controller to your cluster. The Controller will normally run outside of the
control plane, much as you would run any containerized application. For example, you can
run the controller in your cluster as a Deployment.
Using an operator
Once you have an operator deployed, you'd use it by adding, modifying or deleting the kind of
resource that the operator uses. Following the above example, you would set up a
Deployment for the operator itself, and then:

kubectl get SampleDB                   # find configured databases

kubectl edit SampleDB/example-database # manually change some settings

…and that's it! The operator will take care of applying the changes as well as keeping the
existing service in good shape.
Writing your own operator
If there isn't an operator in the ecosystem that implements the behaviour you want, you can
code your own. You can implement an operator (that is, a Controller) using any language /
runtime that can act as a client for the Kubernetes API.

Following are a few libraries and tools you can use to write your own cloud native operator.
Note: This section links to third party projects that provide functionality required by
Kubernetes. The Kubernetes project authors aren't responsible for these projects, which
are listed alphabetically. To add a project to this list, read the content guide before
submitting a change. More information.
What's next
Read the CNCF Operator White Paper.
Learn more about Custom Resources
Find ready-made operators on OperatorHub.io to suit your use case
Publish your operator for other people to use
Read CoreOS' original article that introduced the operator pattern (this is an archived
version of the original article).
Read an article from Google Cloud about best practices for building operators