Object Storage (S3)

S3-compatible object storage for unstructured data, backups, media assets, and application integration.

Overview

Xelon HQ Object Storage provides an S3-compatible API for storing and retrieving unstructured data. It is ideal for static assets, backups, logs, and any data that benefits from HTTP-based access. Objects are organized into buckets and accessed via standard S3 tooling and SDKs.

S3 compatibility

Xelon Object Storage is compatible with the AWS S3 API. You can use the AWS CLI, Terraform S3 backend, or any S3-compatible SDK to interact with your storage.

Creating an S3 User

Before you can create buckets or upload objects, you need an S3 user associated with your organization.

Navigate to Object Storage

Open Virtual Datacenter > S3 Object Storage in the sidebar.

Create an S3 user

Click Create S3 User. Enter an S3 user name (e.g., app-prod), select the Owner tenant (if applicable), choose a Region (single region or replicated — see below), and select a Storage Quota from the available plan cards. An initial access key pair is generated automatically.

Region Selection

Each S3 user is bound to a specific region group at creation time:

  • Single region — Data stays in one location.
  • Replicated region — Data is automatically copied to a second region. Replicated regions cost twice as much per GB as single regions.

Storage Quota Plans

The Storage Quota selector shows available plans as cards. Plans are managed centrally and may include any of the following sizes:

Plan size      Notes
100 GB         Entry tier for small workloads.
250 GB
500 GB
1 TB
2 TB
5 TB           Common starting point for backup workloads.
10 TB
15 TB – 50 TB  Larger tiers (15, 20, 30, 40, 50 TB) for backup repositories and data lakes.

Each plan card shows the monthly price in CHF. Replicated regions show the doubled price. Quota applies per S3 user, uploads are blocked once the quota is reached, and billing is settled at the end of each month.

Changing the plan after creation

Quotas can be changed after creation by editing the S3 user. The plan picker shows ↑ Upgrade or ↓ Downgrade next to each card depending on the change. Downgrades are only allowed when the new plan still fits the user's current usage. The currently active plan is marked with a Current Plan badge.

Generating Access Keys

Each S3 user requires access keys (an access key and a secret key) to authenticate API requests.

Open the key management dialog

From the Access Management tab, click the key icon on the S3 user to open Manage S3 Access Keys.

Create a new key

Click Create New Key and confirm with your password. A new access key pair will be created and displayed.

Copy and store securely

Copy the Access key and Secret key immediately using the copy buttons. The secret key is only shown once and cannot be retrieved later.

Store your secret key safely

The secret access key is displayed only at creation time. If lost, you must delete the key and generate a new one.

Managing Access Keys

You can manage existing keys from the Manage S3 Access Keys dialog:

  • View keys: All active access keys are listed with their creation date.
  • Delete a key: Click Delete next to any key and confirm with your password to revoke it. Applications using that key will immediately lose access.
  • Rotate keys: Create a new key before deleting the old one to avoid downtime during key rotation.
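The rotation step can be scripted on the client side once the new key pair exists. A minimal sketch with the AWS CLI, assuming a named profile `xelon` and placeholder key values (key creation and deletion themselves happen in Xelon HQ):

```shell
# After creating the NEW key pair in Xelon HQ, point the client profile at it.
# (Profile name "xelon" and the placeholder key values are illustrative.)
aws configure set aws_access_key_id     NEWACCESSKEYID     --profile xelon
aws configure set aws_secret_access_key NEWSECRETACCESSKEY --profile xelon

# Verify the new key works before revoking the old one in Xelon HQ
aws --profile xelon --endpoint-url https://<s3-endpoint> s3 ls
```

Only after the verification succeeds should the old key be deleted in the Manage S3 Access Keys dialog.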

S3 user list

The S3 user list on the Access Management tab is paginated (default 5 rows per page; selectable to 10 or 25). Your page-size preference is remembered in browser local storage.

Creating a Bucket

Open bucket management

From the S3 user detail page, navigate to the Buckets tab.

Create a bucket

Click Create Bucket and enter a unique bucket name. Bucket names must be globally unique within the storage cluster, lowercase, and between 3 and 63 characters long.

Configure versioning and object lock

When creating a bucket, two additional options are available:

  • Enable Versioning — Enabled by default. Keeps multiple versions of objects in the bucket, protecting against accidental overwrites and deletions.
  • Enable Object Lock — When enabled, an optional Retention period (days) input appears. If a value is entered, every new object version becomes immutable for that number of days under COMPLIANCE mode (cannot be overwritten or deleted until the period elapses). Leave the field empty to enable Object Lock without a default retention period — this configuration is required by Veeam Backup & Replication and some other backup tools.

Object Lock is permanent

Whether Object Lock is enabled or disabled is decided at bucket creation time and cannot be changed afterwards. Versioning, by contrast, can be toggled on or off at any time from the bucket list — unless Object Lock is active, in which case versioning is permanently enabled and the toggle is locked.
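Because these settings matter long-term, it is worth confirming them over the API as well. A sketch using the standard S3 API calls and the AWS CLI profile described under Connecting with S3 Tools (bucket name illustrative):

```shell
# Object Lock configuration (returns an error if the bucket has no lock)
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3api get-object-lock-configuration --bucket my-bucket

# Versioning status ("Enabled" or "Suspended"; empty output if never enabled)
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3api get-bucket-versioning --bucket my-bucket
```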

Bucket List Columns

The bucket list table includes an Object Lock column (showing Enabled or Disabled) and a Versioning column with a dropdown toggle that allows you to enable or disable versioning on existing buckets. The Versioning toggle is disabled when Object Lock is on (versioning is permanently enabled in that case).

Bucket naming rules

Bucket names must be lowercase, start with a letter or number, and can contain hyphens. They cannot contain underscores, periods, or uppercase characters.
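The naming rules above can be checked locally before attempting to create a bucket. A small bash sketch (the function name is illustrative):

```shell
#!/usr/bin/env bash
# Validate a bucket name against the rules above:
# lowercase letters, digits, and hyphens; must start with a letter or digit;
# 3-63 characters; no underscores, periods, or uppercase characters.
valid_bucket_name() {
  [[ "$1" =~ ^[a-z0-9][a-z0-9-]{2,62}$ ]]
}

valid_bucket_name "myapp-prod-assets" && echo "ok"      # valid
valid_bucket_name "My_Bucket"         || echo "invalid" # uppercase + underscore
```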

IP Restrictions per Bucket

Restrict S3 access to a bucket so that only requests from approved IP addresses or CIDR ranges are allowed. All other requests are denied at the storage layer.

Open IP Restrictions for the bucket

From the Buckets tab, find the target bucket and open its action menu. Click IP Restrictions.

Enable restrictions

Toggle Enable IP Restrictions on. While enabled, all requests from IPs not on the allowlist are denied — ensure every required IP is added before saving.

Add allowed IPs or CIDR ranges

Enter values into Allowed IPs / CIDRs using the placeholder format (e.g., 195.234.68.206 or 192.168.1.0/24 or 2001:db8::/32). The following formats are accepted:

  • Single IPv4 addresses (automatically expanded to /32 CIDR notation).
  • IPv4 CIDR ranges, e.g., 10.0.0.0/8.
  • IPv6 CIDR ranges, e.g., 2001:db8::/32.

Use the + Add my IP button to insert your current public IP automatically.

Save

Click Save. The restriction takes effect immediately. Disabling the toggle and saving removes all restrictions; the bucket becomes accessible from any IP again.
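After saving, it is worth verifying the restriction from a client. A sketch (ifconfig.me is one of several public what-is-my-IP services; bucket name illustrative):

```shell
# Check which public IP your requests originate from
curl -s https://ifconfig.me ; echo

# From an allowed IP this succeeds; from any other IP the request is denied
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3 ls s3://my-bucket/
```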

File Manager and IP restrictions

The built-in File Manager is automatically disabled for buckets that have IP restrictions enabled. Because File Manager requests are proxied by Xelon HQ infrastructure, they would arrive at the S3 endpoint from HQ's IP — which is not on your allowlist. To use the File Manager again, temporarily disable IP restrictions on the bucket.

Validation

At least one IP or CIDR must be present when restrictions are enabled. Duplicate entries are rejected. The restriction list is per S3 user + bucket combination.

Browsing Buckets with the File Manager

The File Manager is a built-in web UI for working with bucket contents directly in the browser — no S3 client, AWS CLI, or third-party tool required. Use it for ad-hoc uploads, downloads, folder organization, or quickly verifying that an automation pipeline produced the expected files.

Opening the File Manager

  1. Navigate to Virtual Datacenter > S3 Object Storage and open the Buckets tab.
  2. Locate the bucket and open its action menu.
  3. Click File Manager. The session is created on the fly and the browser opens directly inside the bucket root.

The modal title shows the bucket name ("File Manager — my-bucket"). Sessions are tied to the S3 user that owns the bucket.

Available Operations

Action              What it does
Browse              Click any folder in the file list to enter it. Use the breadcrumb at the top to jump back to a parent folder or the bucket root.
Upload              Click Upload in the toolbar. The standard file picker opens; multi-select is supported. Files are uploaded to the currently open folder.
Download            Click the download icon at the right end of any file row. The file is downloaded with its original name.
New Folder          Click New Folder, enter a name, and press Enter or click Create. The folder is created in the currently open path.
Copy / Cut + Paste  Select one or more items, click Copy or Cut, navigate to the destination folder, and click Paste. Cut items are dimmed in the source location until pasted. Use Clear clipboard to abort. Copy and move stay within the same bucket.
Delete              Select one or more items and click Delete. A confirmation dialog asks for verification before the deletion runs.
Multi-select        Use the row checkboxes or the header checkbox to select all visible items. The footer shows a live count of selected items and the clipboard state.

Limits and Notes

  • Same bucket only. Copy and move operations stay inside the current bucket. Cross-bucket copies require an S3 client (AWS CLI, mc, or your own integration).
  • One bucket per session. The session token is bound to the S3 user that owns the bucket. To browse a bucket owned by a different S3 user, open File Manager from that bucket's action menu.
  • IP restrictions block File Manager. Because requests are proxied by Xelon HQ, a bucket with active IP restrictions cannot be browsed via File Manager. The action item is disabled with a hint message in that case.
  • Object Lock is respected. Files protected by an active retention period cannot be deleted; the operation will fail with the underlying S3 error.
  • Browser-based uploads. Files travel through your browser. For large files (multi-GB), using aws s3 cp or mc is faster and more reliable than the browser uploader.
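For the large-file case in the last bullet, the AWS CLI's multipart transfer settings can be tuned per profile. A sketch with illustrative values:

```shell
# Upload large files in bigger parts, with more parallel requests
aws configure set s3.multipart_chunksize 64MB --profile xelon
aws configure set s3.max_concurrent_requests 10 --profile xelon

# Then upload as usual (file and bucket names illustrative)
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3 cp ./disk-image.qcow2 s3://my-bucket/images/
```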

Deleting a Bucket

Bucket must be empty

A bucket must be completely empty before it can be deleted. Remove all objects and incomplete multipart uploads first.
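Emptying the bucket can be done with the AWS CLI. Note that `s3 rm --recursive` removes current objects but not old versions in a versioned bucket, and that incomplete multipart uploads must be aborted separately (bucket name and placeholders illustrative):

```shell
# Remove all (current) objects
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3 rm s3://my-bucket/ --recursive

# List incomplete multipart uploads, then abort each one
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3api list-multipart-uploads --bucket my-bucket
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3api abort-multipart-upload --bucket my-bucket \
    --key <object-key> --upload-id <upload-id>
```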

To delete a bucket, select it from the bucket list and click Delete. Confirm the deletion when prompted.

Connecting with S3 Tools

Configure the AWS CLI to work with Xelon Object Storage by setting up a named profile. Use the zone of your S3 user as the region (e.g., zh1 for Zürich, ch1 for Aargau/Lupfig):

aws configure --profile xelon
# AWS Access Key ID: <your-access-key-id>
# AWS Secret Access Key: <your-secret-access-key>
# Default region name: zh1
# Default output format: json

Then use the --endpoint-url flag to direct requests to the Xelon S3 endpoint:

# List buckets
aws --profile xelon --endpoint-url https://<s3-endpoint> s3 ls

# Upload a file
aws --profile xelon --endpoint-url https://<s3-endpoint> s3 cp ./backup.tar.gz s3://my-bucket/

# Download a file
aws --profile xelon --endpoint-url https://<s3-endpoint> s3 cp s3://my-bucket/backup.tar.gz ./

# Sync a directory
aws --profile xelon --endpoint-url https://<s3-endpoint> s3 sync ./data/ s3://my-bucket/data/

Endpoint URL and region

The S3 endpoint URL is shown on your S3 user details page; it varies by cloud location and replication setting. The region in your AWS CLI profile should match the zone of your S3 user (zh1 for Zürich, ch1 for Aargau/Lupfig). Other S3 clients (e.g., mc, Veeam, Rclone) ask for the same two values.
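A related technique: presigned URLs let you share a single object over HTTPS without handing out access keys. This is a standard S3 feature and should work with the same profile and endpoint (expiry is given in seconds; object name illustrative):

```shell
# URL valid for 1 hour; anyone holding the URL can download the object
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3 presign s3://my-bucket/backup.tar.gz --expires-in 3600
```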

Encryption at Rest

Xelon Object Storage does not currently apply server-side encryption (SSE) automatically. Buckets created through Xelon HQ do not enable SSE-S3, SSE-KMS, or SSE-C at the object-storage layer — objects are stored on the underlying Ceph cluster as you upload them.

If you need a hard guarantee that data is encrypted at rest with keys you control, use client-side encryption: encrypt before uploading. The key never leaves your environment, which is the strongest sovereignty model and works without any platform-side feature.

Roadmap

Server-side encryption with customer-managed keys (SSE-KMS) is planned for a future release. Once available, you will be able to enable per-bucket encryption with a key from a managed Xelon Key Vault — no client-side changes required. Contact your account manager for current timelines.

Example 1: One-off file encryption with OpenSSL

OpenSSL is preinstalled on virtually every Linux and macOS system — useful for ad-hoc encrypt-then-upload of single files or archives:

# Encrypt a file with AES-256-CBC and a password-derived key
openssl enc -aes-256-cbc -salt -pbkdf2 \
  -in mydata.tar -out mydata.tar.enc

# Upload the encrypted file
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3 cp mydata.tar.enc s3://my-bucket/

# Later: download and decrypt
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3 cp s3://my-bucket/mydata.tar.enc ./
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in mydata.tar.enc -out mydata.tar

OpenSSL prompts for a passphrase at encrypt and decrypt time. Store the passphrase securely — without it the data is unrecoverable.

Example 2: Stronger key management with GPG

GnuPG (gpg) provides proper public/private key pairs and integrates with hardware tokens such as YubiKey or smartcards. It is the right choice when multiple people or services need to read the same backups, or when you want hardware-backed key storage:

# One-time: generate a keypair if you don't have one
gpg --full-generate-key

# Encrypt for one or more recipients
gpg --encrypt --recipient your-key-id mydata.tar
# produces mydata.tar.gpg

# Upload
aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3 cp mydata.tar.gpg s3://my-bucket/

# Decrypt later (private key is required)
gpg --decrypt mydata.tar.gpg > mydata.tar

You can encrypt for multiple recipients in the same operation (each holding their own private key), which is convenient for shared backups within a team.

Example 3: Transparent encryption for sync and backup with rclone

For continuous backup or sync workloads, manual per-file encryption is impractical. rclone provides a crypt backend that wraps any S3-compatible remote and encrypts both file contents and names transparently:

# 1. Configure your Xelon S3 endpoint as a remote (e.g., named "xelon")
rclone config
#   Storage type: s3
#   Provider: Other
#   Access key + Secret key: your S3 keys
#   Endpoint: https://<s3-endpoint>
#   Region: zh1 (or ch1 — match your S3 user's zone)

# 2. Create a "crypt" remote that wraps the S3 remote
rclone config
#   Name: xelon-crypt
#   Storage: crypt
#   Remote: xelon:my-bucket/encrypted/
#   Filename encryption: standard
#   Directory name encryption: true
#   Password + salt: choose strong values — store them safely

# 3. From now on, use xelon-crypt: as if it were a normal remote
rclone copy /local/data xelon-crypt:
rclone sync /local/data xelon-crypt:

Files appear in the underlying bucket with scrambled names and AES-encrypted content. The crypt backend is widely used in production backup pipelines and is well audited.

Veeam Backup & Replication

Veeam customers do not need any of the above — Veeam encrypts backup files natively. Enable it at job creation time under Storage → Advanced → Storage → Enable backup file encryption, supply a strong passphrase, and Veeam encrypts every block before it leaves your network. The passphrase never reaches Xelon. Combined with Object Lock on the destination bucket, this gives you immutable, customer-encrypted backup repositories that are regulator-defensible today.

Best Practices

  • Use separate S3 users per application to isolate access and simplify key rotation.
  • Rotate access keys regularly by generating a new key, updating your applications, then deleting the old key.
  • Use meaningful bucket names that reflect the project and environment (e.g., myapp-prod-assets).
  • Enable versioning for buckets containing critical data to protect against accidental overwrites.
  • Set lifecycle policies for temporary data such as logs to automatically expire old objects.
  • Avoid storing secrets in object storage. Use a dedicated secrets manager instead.
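The lifecycle recommendation above can be applied through the standard S3 API, assuming the endpoint supports bucket lifecycle configuration (rule values and names illustrative):

```shell
# Expire objects under the logs/ prefix after 30 days
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF

aws --profile xelon --endpoint-url https://<s3-endpoint> \
    s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket --lifecycle-configuration file://lifecycle.json
```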