Openshift Driver

Selecting the openshift driver adds the following options to the providers section of the configuration.

providers.[openshift]
Type: list

An Openshift provider’s resources are partitioned into groups called pools (see providers.[openshift].pools for details), and within a pool, the node types which are to be made available are listed (see providers.[openshift].pools.labels for details).

Note

For documentation purposes the option names are prefixed providers.[openshift] to disambiguate from other drivers, but [openshift] is not required in the configuration (e.g. below providers.[openshift].pools refers to the pools key in the providers section when the openshift driver is selected).

Example:

providers:
  - name: cluster
    driver: openshift
    context: context-name
    pools:
      - name: main
        labels:
          - name: openshift-project
            type: project
          - name: openshift-pod
            type: pod
            image: docker.io/fedora:28
providers.[openshift].context (required)

Name of the context configured in kube/config.

Before using the driver, the Nodepool services need a kube/config file manually installed with a self-provisioner context (the service account needs to be able to create projects). Make sure the context is present in the output of the oc config get-contexts command.
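
As a point of reference, a kube/config providing such a context might look like the following sketch (the server URL, user name, and token are placeholders, not real values):

apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://openshift.example.com:6443
contexts:
  - name: context-name
    context:
      cluster: my-cluster
      user: self-provisioner
users:
  - name: self-provisioner
    user:
      token: <service-account-token>
current-context: context-name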

providers.[openshift].launch-retries
Default: 3

The number of times to retry launching a node before considering the job failed.

providers.[openshift].max-projects
Default: infinite
Type: int

An alias for max-servers. Note that using max-servers and max-projects at the same time in the configuration will result in an error.

providers.[openshift].max-cores
Default: unlimited
Type: int

Maximum number of cores usable from this provider’s pools by default. This can be used to limit usage of the openshift backend. If not defined, nodepool can use all cores up to the limit of the backend.

providers.[openshift].max-servers
Default: unlimited
Type: int

Maximum number of projects spawnable from this provider’s pools by default. This can be used to limit the number of projects. If not defined, nodepool can create as many servers as the openshift backend allows. Note that using max-servers and max-projects at the same time in the configuration will result in an error.

providers.[openshift].max-ram
Default: unlimited
Type: int

Maximum amount of RAM usable from this provider’s pools by default. This can be used to limit the amount of RAM allocated by nodepool. If not defined, nodepool can use as much RAM as the openshift backend allows.

providers.[openshift].max-resources
Default: unlimited
Type: dict

A dictionary of other quota resource limits applicable to this provider’s pools by default. Arbitrary limits may be supplied with the providers.[openshift].pools.labels.extra-resources attribute.
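
For illustration, a provider combining these quota options might look like the following sketch (the values and the nvidia.com/gpu resource name are illustrative assumptions, not recommendations):

providers:
  - name: cluster
    driver: openshift
    context: context-name
    max-servers: 10        # at most 10 projects from this provider
    max-cores: 64
    max-ram: 65536         # assumed to be expressed in MiB, like the label memory options
    max-resources:
      nvidia.com/gpu: 4    # an extended resource tracked for quota
    pools:
      - name: main
        labels:
          - name: openshift-project
            type: project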

providers.[openshift].pools
Type: list

A pool defines a group of resources from an Openshift provider.

providers.[openshift].pools.name (required)

Project names are prefixed with the pool’s name.

providers.[openshift].pools.priority
Default: 100
Type: int

The priority of this provider pool (a lesser number is a higher priority). Nodepool launchers will yield requests to other provider pools with a higher priority as long as they are not paused. This means that in general, higher priority pools will reach quota first before lower priority pools begin to be used.

This setting may be specified at the provider level in order to apply to all pools within that provider, or it can be overridden here for a specific pool.
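
For example, to prefer one pool over another (the pool names and values are illustrative):

providers:
  - name: cluster
    driver: openshift
    context: context-name
    priority: 100            # default for all pools of this provider
    pools:
      - name: preferred
        priority: 50         # lower number, so requests go here first
        labels:
          - name: openshift-project
            type: project
      - name: overflow       # inherits priority 100 from the provider
        labels:
          - name: openshift-project
            type: project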

providers.[openshift].pools.node-attributes
Type: dict

A dictionary of key-value pairs that will be stored with the node data in ZooKeeper. The keys and values can be any arbitrary string.
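
For example (the keys and values are arbitrary strings of the operator’s choosing):

pools:
  - name: main
    node-attributes:
      maintainer: infra-team
      region: example-region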

providers.[openshift].pools.max-cores
Type: int

Maximum number of cores usable from this pool. This can be used to limit usage of the openshift backend. If not defined, nodepool can use all cores up to the limit of the backend.

providers.[openshift].pools.max-servers
Type: int

Maximum number of pods spawnable from this pool. This can be used to limit the number of pods. If not defined, nodepool can create as many servers as the openshift backend allows.

providers.[openshift].pools.max-ram
Type: int

Maximum amount of RAM usable from this pool. This can be used to limit the amount of RAM allocated by nodepool. If not defined, nodepool can use as much RAM as the openshift backend allows.

providers.[openshift].pools.max-resources
Default: unlimited
Type: dict

A dictionary of other quota resource limits applicable to this pool. Arbitrary limits may be supplied with the providers.[openshift].pools.labels.extra-resources attribute.
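
A sketch combining these pool-level quota options (the values and the extended resource name are illustrative):

pools:
  - name: main
    max-servers: 5          # at most 5 pods from this pool
    max-cores: 16
    max-ram: 16384          # assumed MiB, like the label memory options
    max-resources:
      nvidia.com/gpu: 2     # an extended resource tracked for quota
    labels:
      - name: openshift-pod
        type: pod
        image: docker.io/fedora:28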

providers.[openshift].pools.default-label-cpu
Type: int

Only used by the pod label type; specifies a default value for providers.[openshift].pools.labels.cpu for all labels of this pool that do not set their own value.

providers.[openshift].pools.default-label-memory
Type: int

Only used by the pod label type; specifies a default value in MiB for providers.[openshift].pools.labels.memory for all labels of this pool that do not set their own value.

providers.[openshift].pools.default-label-storage
Type: int

Only used by the pod label type; specifies a default value in MB for providers.[openshift].pools.labels.storage for all labels of this pool that do not set their own value.

providers.[openshift].pools.default-label-cpu-limit
Type: int

Only used by the pod label type; specifies a default value for providers.[openshift].pools.labels.cpu-limit for all labels of this pool that do not set their own value.

providers.[openshift].pools.default-label-memory-limit
Type: int

Only used by the pod label type; specifies a default value in MiB for providers.[openshift].pools.labels.memory-limit for all labels of this pool that do not set their own value.

providers.[openshift].pools.default-label-storage-limit
Type: int

Only used by the pod label type; specifies a default value in MB for providers.[openshift].pools.labels.storage-limit for all labels of this pool that do not set their own value.
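
For example, with the following pool defaults (values illustrative), every pod label that does not set its own values requests 2 cpu and 1024 MiB of memory, limited to 4 cpu and 2048 MiB:

pools:
  - name: main
    default-label-cpu: 2
    default-label-memory: 1024        # MiB
    default-label-storage: 10240      # MB
    default-label-cpu-limit: 4
    default-label-memory-limit: 2048  # MiB
    labels:
      - name: openshift-pod
        type: pod
        image: docker.io/fedora:28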

providers.[openshift].pools.labels
Type: list

Each entry in a pool’s labels section indicates that the corresponding label is available for use in this pool.

Each entry is a dictionary with the following keys:

providers.[openshift].pools.labels.name (required)

Identifier for this label; references an entry in the labels section.

providers.[openshift].pools.labels.type

The Openshift provider supports two types of labels:

project

Project labels provide an empty project configured with a service account that can create pods, services, configmaps, etc.

pod

Pod labels provide a new dedicated project with a single pod created using the providers.[openshift].pools.labels.image parameter; the project is configured with a service account that can exec into the pod and get its logs.

providers.[openshift].pools.labels.image

Only used by the pod label type; specifies the image name used by the pod.

providers.[openshift].pools.labels.image-pull
Default: IfNotPresent
Type: str

The ImagePullPolicy; one of IfNotPresent, Always, or Never.

providers.[openshift].pools.labels.image-pull-secrets
Default: []
Type: list

The imagePullSecrets needed to pull container images from a private registry.

Example:

labels:
  - name: openshift-pod
    image: docker.io/fedora:28
    image-pull-secrets:
      - name: registry-secret
providers.[openshift].pools.labels.labels
Type: dict

A dictionary of additional values to be added to the namespace or pod metadata. The value of this field is added to the metadata.labels field in OpenShift. Note that this field contains arbitrary key/value pairs and is unrelated to the concept of labels in Nodepool.
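
For example (the key/value pairs are arbitrary):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    labels:
      environment: ci
      owner: qa-team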

providers.[openshift].pools.labels.dynamic-labels
Default: None
Type: dict

Similar to providers.[openshift].pools.labels.labels, but the values are interpreted as format strings with the following values available:

  • request: Information about the request which prompted the creation of this node (note that the node may ultimately be used for a different request and in that case this information will not be updated).

    • id: The request ID.

    • labels: The list of labels in the request.

    • requestor: The name of the requestor.

    • requestor_data: Key/value information from the requestor.

    • relative_priority: The relative priority of the request.

    • event_id: The external event ID of the request.

    • created_time: The creation time of the request.

    • tenant_name: The name of the tenant associated with the request.

For example:

labels:
  - name: pod-fedora
    dynamic-labels:
      request_info: "{request.id}"
providers.[openshift].pools.labels.annotations
Type: dict

A dictionary of additional values to be added to the pod metadata. The value of this field is added to the metadata.annotations field in OpenShift. This field contains arbitrary key/value pairs that can be accessed by tools and libraries; e.g., custom schedulers can make use of this metadata.
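
For example (the annotation key is hypothetical):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    annotations:
      scheduler.example.com/queue: batch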

providers.[openshift].pools.labels.python-path
Default: auto
Type: str

The path of the default python interpreter. Used by Zuul to set ansible_python_interpreter. The special value auto will direct Zuul to use inbuilt Ansible logic to select the interpreter on Ansible >=2.8, and default to /usr/bin/python2 for earlier versions.

providers.[openshift].pools.labels.shell-type
Default: sh
Type: str

The shell type of the node’s default shell executable. Used by Zuul to set ansible_shell_type. This setting should only be used:

  • For a windows image with the experimental connection-type ssh, in which case cmd or powershell should be set and reflect the node’s DefaultShell configuration.

  • If the default shell is not Bourne compatible (sh), but instead e.g. csh or fish, and the user is aware that there is a long-standing issue with ansible_shell_type in combination with become.

providers.[openshift].pools.labels.cpu
Type: int

Only used by the pod label type; specifies the number of cpu to request for the pod. If no limit is specified, this will also be used as the limit.

providers.[openshift].pools.labels.memory
Type: int

Only used by the pod label type; specifies the amount of memory in MiB to request for the pod. If no limit is specified, this will also be used as the limit.

providers.[openshift].pools.labels.storage
Type: int

Only used by the pod label type; specifies the amount of ephemeral-storage in MB to request for the pod. If no limit is specified, this will also be used as the limit.
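
For example, a pod label requesting resources (values illustrative; since no explicit limits are set, the requests also become the limits):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    cpu: 2          # cores
    memory: 2048    # MiB
    storage: 10240  # MB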

providers.[openshift].pools.labels.extra-resources
Type: dict

Only used by the pod label type; specifies any extra resources that Nodepool should consider in its quota calculation other than the resources described above (cpu, memory, storage).

providers.[openshift].pools.labels.cpu-limit
Type: int

Only used by the pod label type; specifies the cpu limit for the pod.

providers.[openshift].pools.labels.memory-limit
Type: int

Only used by the pod label type; specifies the memory limit in MiB for the pod.

providers.[openshift].pools.labels.storage-limit
Type: int

Only used by the pod label type; specifies the ephemeral-storage limit in MB for the pod.
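
For example, to let a pod burst above its requests (values illustrative):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    cpu: 1
    cpu-limit: 2        # may burst to 2 cores
    memory: 1024        # MiB
    memory-limit: 2048  # MiB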

providers.[openshift].pools.labels.gpu
Type: float

Only used by the pod label type; specifies the amount of gpu allocated to the pod. This will be used to set both requests and limits to the same value, based on how kubernetes assigns gpu resources: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/.

providers.[openshift].pools.labels.gpu-resource
Type: str

Only used by the pod label type; specifies the custom schedulable resource associated with the installed gpu that is available in the cluster.
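
For example, assuming the cluster advertises NVIDIA GPUs via the nvidia.com/gpu resource (the actual resource name depends on the installed device plugin):

labels:
  - name: gpu-pod
    type: pod
    image: docker.io/fedora:28
    gpu: 1.0
    gpu-resource: nvidia.com/gpu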

providers.[openshift].pools.labels.env
Default: []
Type: list

Only used by the pod label type; a list of environment variables to pass to the Pod.

providers.[openshift].pools.labels.env.name (required)
Type: str

The name of the environment variable passed to the Pod.

providers.[openshift].pools.labels.env.value (required)
Type: str

The value of the environment variable passed to the Pod.
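
For example (the variable name and value are illustrative):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    env:
      - name: HTTP_PROXY
        value: http://proxy.example.com:3128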

providers.[openshift].pools.labels.node-selector
Type: dict

Only used by the pod label type; a map of key-value pairs to ensure the OpenShift scheduler places the Pod on a node with specific node labels.
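
For example, to place pods on amd64 nodes using the well-known kubernetes.io/arch node label:

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    node-selector:
      kubernetes.io/arch: amd64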

providers.[openshift].pools.labels.scheduler-name
Type: str

Only used by the pod label type. Sets the schedulerName field on the container. Normally left unset for the OpenShift default.

providers.[openshift].pools.labels.privileged
Type: bool

Only used by the pod label type. Sets the securityContext.privileged flag on the container. Normally left unset for the OpenShift default.

providers.[openshift].pools.labels.volumes
Type: list

Only used by the pod label type. Sets the volumes field on the pod. If supplied, this should be a list of OpenShift Pod Volume definitions.

providers.[openshift].pools.labels.volume-mounts
Type: list

Only used by the pod label type. Sets the volumeMounts flag on the container. If supplied, this should be a list of OpenShift Container VolumeMount definitions.
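
For example, mounting a ConfigMap into the pod (the ConfigMap name and mount path are hypothetical):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    volumes:
      - name: tool-config
        configMap:
          name: tool-config-map
    volume-mounts:
      - name: tool-config
        mountPath: /etc/tool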

providers.[openshift].pools.labels.spec
Type: dict

This attribute is exclusive with all other label attributes except providers.[openshift].pools.labels.name, providers.[openshift].pools.labels.type, providers.[openshift].pools.labels.annotations, providers.[openshift].pools.labels.labels, and providers.[openshift].pools.labels.dynamic-labels. If a spec is provided, Nodepool will supply the contents of this value verbatim to OpenShift as the spec attribute of the OpenShift Pod definition. No other Nodepool attributes are used, including any default values set at the provider level (such as default-label-cpu and similar).

This attribute allows for the creation of arbitrarily complex pod definitions, but the user is responsible for ensuring that they are suitable. The first container in the pod is expected to be a long-running container that hosts a shell environment for running commands. The following minimal definition matches what Nodepool itself normally creates and is recommended as a starting point:

labels:
  - name: custom-pod
    type: pod
    spec:
      containers:
        - name: custom-pod
          image: ubuntu:jammy
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args: ["while true; do sleep 30; done;"]