Xelon Kubernetes Service (XKS)

Xelon Kubernetes Service (XKS) is the managed Kubernetes offering on Xelon HQ. XKS provides conformant upstream Kubernetes with a managed control plane, automated node lifecycle, and native integrations for storage, networking, and load balancing.

XKS clusters run on Talos Linux — an immutable, minimal operating system purpose-built for Kubernetes. Talos eliminates SSH access and traditional package managers, reducing your attack surface and simplifying operations. Talos is the underlying OS; the managed Kubernetes service itself is XKS.

Why an immutable OS?

Talos Linux is fully API-managed, immutable, and contains only the components required to run Kubernetes. This makes XKS clusters more secure, consistent, and easier to upgrade.

Creating an XKS Cluster

Navigate to Kubernetes > Kubernetes and click Create Cluster.

Field | Description
Cluster Name | A unique name for your cluster. Use lowercase letters, numbers, and hyphens.
Select Tenant | The tenant that owns the cluster.
Select Cloud | The cloud location for the cluster infrastructure.
Talos Version | The Talos Linux version to use for the cluster nodes.
Kubernetes Version | The Kubernetes version. Must be compatible with the chosen Talos version.
Schematic ID | Optional. A unique identifier for a specific Talos OS image with built-in components from the Talos Image Factory.

Dedicated Network

Each Kubernetes cluster is automatically provisioned with a dedicated /29 WAN subnet (eight IPv4 addresses). This gives the cluster its own static public IP addresses for the API endpoint, load balancers, and virtual IPs. You do not need to reserve IPs manually — the entire subnet is dedicated to the cluster for its lifetime.

Control Plane Configuration

Configure the control plane nodes that manage the cluster state and API server:

Parameter | Description | Recommendation
CPU Cores | Virtual CPU cores per control plane node. | 2+ cores
RAM | Memory per control plane node, in GB. | 4+ GB
Disk Size | Storage per control plane node, in GB. | 50+ GB
Node Count | Number of control plane nodes. Use an odd number to maintain quorum. | 3 for HA

A three-node control plane tolerates the failure of one node while maintaining quorum (2 of 3).

Worker Node Configuration

Worker nodes run your application workloads. Configure resources based on your expected workload requirements:

Parameter | Description | Minimum
CPU Cores | Virtual CPU cores per worker node. | 2 cores
RAM | Memory per worker node, in GB. | 2 GB
Disk Size | Storage per worker node, in GB. | 20 GB
Node Count | Number of worker nodes. | 1 node

Click Create to start provisioning. XKS clusters typically take a few minutes to become ready.
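
Once the cluster reports ready, download the provided kubeconfig and confirm that all nodes have registered with kubectl get nodes. As a quick smoke test, you can deploy a throwaway workload. A minimal sketch, with arbitrary names and a public image:

```yaml
# smoke-test.yaml — confirms the cluster schedules pods.
# Apply with: kubectl apply -f smoke-test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-xks            # arbitrary example name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-xks
  template:
    metadata:
      labels:
        app: hello-xks
    spec:
      containers:
        - name: web
          image: nginx:1.27  # any public image works; nginx is illustrative
          ports:
            - containerPort: 80
```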

What's Included vs. What You Manage

Xelon HQ provisions a production-ready, conformant Kubernetes cluster with a baseline of pre-installed components. Everything beyond the baseline is fully under your control — XKS clusters are standard upstream Kubernetes, so any workload, CRD, operator, or add-on you can run on vanilla Kubernetes will run here.

Pre-Installed by Xelon

Every XKS cluster is bootstrapped with the following components, managed by Xelon:

Component | Purpose
Cilium | Container Network Interface (CNI) providing pod networking, network policies, and observability. Cilium also acts as the default ingress controller on every XKS cluster and provides the cilium IngressClass (see the example after this table).
Metrics Server | Serves resource metrics (CPU, memory) on the metrics.k8s.io API, enabling kubectl top.
Xelon CSI Driver | Container Storage Interface (CSI) driver that integrates Kubernetes PersistentVolumeClaim resources with Xelon persistent storage.
Xelon Cloud Controller Manager (CCM) | Cloud-provider integration for Service type=LoadBalancer, node lifecycle management, and routing.
Kubelet Serving Certificate Approver | Automatically approves kubelet serving certificates so metrics and log endpoints work out of the box.
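
These integrations are consumed through standard Kubernetes APIs. The sketch below exercises three of them: an Ingress routed by Cilium, a PersistentVolumeClaim provisioned by the Xelon CSI driver, and a Service of type LoadBalancer handled by the CCM. The hostname, backend names, and labels are placeholders, and the StorageClass is left to the cluster default; run kubectl get storageclass and kubectl get ingressclass to see what your cluster actually provides.

```yaml
# Ingress handled by the pre-installed Cilium ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # arbitrary example name
spec:
  ingressClassName: cilium       # the IngressClass provided on every XKS cluster
  rules:
    - host: app.example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # assumes a Service named "web" exists
                port:
                  number: 80
---
# PersistentVolumeClaim provisioned by the Xelon CSI driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                 # arbitrary example name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # storageClassName is omitted so the cluster default applies; list the
  # available classes with `kubectl get storageclass`.
---
# Service of type LoadBalancer — the Xelon CCM allocates a public IP
# from the cluster's dedicated /29 subnet.
apiVersion: v1
kind: Service
metadata:
  name: web-lb                   # arbitrary example name
spec:
  type: LoadBalancer
  selector:
    app: web                     # assumes pods labeled app=web
  ports:
    - port: 80
      targetPort: 80
```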

What You Manage

You have full administrative access to the cluster via the provided kubeconfig. There are no admission policies or API restrictions imposed by Xelon beyond the pre-installed components. Everything listed below is supported without limitations:

  • Custom Resource Definitions (CRDs) — Install any CRD required by your applications, operators, or platform tools (e.g., cert-manager, Sealed Secrets, Kyverno, Argo CD, Flux).
  • Gateway API — Install the Gateway API CRDs and use Gateway, GatewayClass, HTTPRoute, and GRPCRoute resources along with a compatible implementation (e.g., Cilium Gateway, Istio, Envoy Gateway).
  • Additional ingress controllers — Cilium is configured as the default ingress controller on every XKS cluster (see the table above). You can install additional ingress controllers alongside Cilium — NGINX Ingress, Traefik, HAProxy, Kong, Contour, etc. — if you need protocol or feature support that Cilium does not provide.
  • Service meshes — Install Istio, Linkerd, or Cilium Service Mesh for advanced traffic management, mTLS, and observability.
  • Custom operators and controllers — Run any Kubernetes operator or controller your workloads require.
  • Helm charts — Deploy applications and platform components via Helm.
  • Autoscaling (HPA, VPA, Cluster Autoscaler) — The Horizontal Pod Autoscaler controller ships with upstream Kubernetes, and the pre-installed Metrics Server supplies the resource metrics it consumes, so HorizontalPodAutoscaler resources work without extra installation (see the sketch after this list). The Vertical Pod Autoscaler and the Kubernetes Cluster Autoscaler are not part of the managed offering today and must be deployed and configured by you.
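
For example, a HorizontalPodAutoscaler needs nothing beyond what ships with the cluster. A minimal sketch, assuming a Deployment named web exists (name and thresholds are illustrative):

```yaml
# Scale an assumed Deployment "web" between 2 and 10 replicas based on
# average CPU utilization. Works out of the box on XKS because the
# Metrics Server is pre-installed.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
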
Standard Kubernetes

XKS clusters on Xelon HQ are conformant upstream Kubernetes. If a workload runs on any other Kubernetes distribution, it will run here. Xelon does not fork Kubernetes or impose custom APIs.

Scaling Worker Nodes

Worker node counts and per-node resources (CPU, RAM, disk) are adjusted manually through the HQ UI or API. Load-based cluster node autoscaling is not provided by the managed service — you can install the Kubernetes Cluster Autoscaler yourself if automated node scaling is required.

Cluster Info and Status

The cluster details page provides an overview of your XKS cluster including:

  • Cluster health status and readiness
  • Talos Linux version and Kubernetes version
  • Control plane and worker node counts
  • API server endpoint
  • Individual node status and resource utilization

Editing Control Planes

You can modify the control plane configuration after cluster creation. From the cluster details page, navigate to the Control Plane section and click Edit. You can adjust:

  • CPU cores and RAM per control plane node
  • Disk size per node
  • Number of control plane nodes (maintain odd numbers for quorum)

Control Plane Changes

Modifying control plane resources triggers a rolling update. Nodes are updated one at a time to maintain cluster availability. Ensure you have at least 3 control plane nodes for high availability during updates.

Managing Worker Nodes

Worker nodes can be scaled and reconfigured as your workload demands change:

  • Scale up: Increase the worker node count to add capacity.
  • Scale down: Decrease the node count to reduce costs. Workloads on removed nodes are automatically rescheduled onto the remaining nodes; a PodDisruptionBudget (see the sketch after this list) lets you bound how much of a workload can be disrupted at once.
  • Resize: Modify CPU, RAM, or disk per worker node.
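
Node drains during scale-down (and during upgrades) honor PodDisruptionBudgets. A minimal sketch, assuming pods labeled app: web (a hypothetical label):

```yaml
# Keep at least 2 pods with label app=web available at all times, so a
# node drain during scale-down or an upgrade never takes the workload
# below that floor.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web               # hypothetical label
```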

Cluster Resources

The resources view shows the aggregate compute capacity of your cluster, including total and available CPU, memory, and storage across all nodes. Use this view to determine whether you need to scale your cluster.
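
Schedulable capacity is consumed by pod resource requests rather than actual usage, so setting explicit requests on your workloads keeps these numbers meaningful. A minimal sketch (names, image, and values are illustrative):

```yaml
# Requests count against the node capacity shown in the resources view
# at scheduling time; limits cap what the container may actually use.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                        # hypothetical example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: ghcr.io/example/api:1.0   # placeholder image
          resources:
            requests:
              cpu: 250m            # reserved per replica at scheduling
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
```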

Upgrading Talos and Kubernetes Versions

Xelon HQ supports in-place upgrades for both Talos Linux and Kubernetes versions. From the cluster details page, click Upgrade and select the target versions.

Check Compatibility

Verify that your target Kubernetes version is compatible with the selected Talos version. The upgrade dialog shows only compatible combinations.

Initiate Upgrade

Select the new Talos and/or Kubernetes version and confirm the upgrade. Control plane nodes are upgraded first, followed by worker nodes.

Monitor Progress

Track the upgrade progress in the cluster details view. Each node shows its current version and upgrade status.

Tip

Test version upgrades on a non-production cluster first. Ensure your workloads are compatible with the new Kubernetes version before upgrading production clusters.

Deleting a Cluster

To delete an XKS cluster, navigate to the cluster details page and click Delete Cluster. Confirm the deletion when prompted.

Warning

Deleting a cluster permanently destroys all nodes, workloads, persistent volumes, and associated resources. This action is irreversible.