Ubuntu 26.04 LTS + Proxmox: a practical guide to the 2026 virtualization stack

Ubuntu 26.04 LTS is the latest long-term support release from Canonical, and it arrives at a time when virtualization in mid-sized infrastructures (the space where most Latin American ISPs and IT companies operate) is maturing rapidly. Kernel 7.0, Intel TDX support for confidential computing, and improvements in automation tooling integration make this a release worth adopting thoughtfully.

For those running Proxmox VE as their hypervisor platform (the most widely used open-source alternative in the ISP and mid-scale datacenter segment in LATAM), Ubuntu 26.04 as a guest OS is now the combination with the best support, the best QEMU/KVM integration, and the most automation options.

This guide covers the practical elements of that combination.


What’s new in Ubuntu 26.04 LTS

Kernel 7.0: what changes for infrastructure

Linux kernel 7.0 is a significant evolution from the 6.x line, with changes that matter for virtualization and infrastructure workloads:

Refined scheduler (EEVDF): The EEVDF (Earliest Eligible Virtual Deadline First) scheduler, which replaced CFS as the default back in kernel 6.6, continues to mature. In mixed virtualization workloads (high- and low-priority VMs on the same host), EEVDF improves CPU distribution and reduces latency for interactive VMs without sacrificing throughput for batch-processing VMs.

Mature io_uring: io_uring has reached full support for the most common I/O operations, including network operations. For Proxmox, the VM disk I/O interface benefits from the lower latency and reduced CPU cost of io_uring compared to the classic AIO subsystem.
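In Proxmox, the AIO backend is selectable per virtual disk. A sketch of switching an existing disk to io_uring (the VM ID and volume name are illustrative; a dedicated I/O thread requires the virtio-scsi-single controller):

```shell
# Switch an existing virtual disk to io_uring with a dedicated I/O thread.
# VM ID 101 and the volume name are illustrative values.
qm set 101 --scsihw virtio-scsi-single
qm set 101 --scsi0 local-lvm:vm-101-disk-0,aio=io_uring,iothread=1
```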

KVM and VFIO improvements: The KVM subsystem receives improvements in EPT (Extended Page Tables) memory management that reduce memory overhead on hosts running many small VMs. VFIO (for hardware passthrough) improves stability and adds PCIe 5.0 support.

eBPF as a first-class citizen: eBPF capabilities continue to expand, with improvements to the JIT compiler and map interfaces. For Proxmox networking (OVS, VLAN, bridges), eBPF enables monitoring and filtering functions with minimal overhead.

Intel TDX: confidential virtualization in production

Intel TDX (Trust Domain Extensions) is Intel’s implementation of what is known as a Trusted Execution Environment (TEE) for full VMs. The core idea: a TDX VM (“TD” in Intel’s nomenclature) runs in an environment where the hypervisor cannot access the VM’s memory or processor state, even if the hypervisor is compromised.

This solves a problem that doesn’t exist in most small infrastructures, but is highly relevant in three specific contexts:

Client hosting on shared infrastructure: If your company offers VPS or virtual dedicated servers to third parties, TDX lets you guarantee those clients that the hypervisor operator cannot read their memory. It’s the technical equivalent of “not even we can see what runs in your VM.”

Compliance and sensitive data: Workloads with strict compliance requirements (financial data processing, health data, personal data under legislation such as Argentina’s Data Protection Law, Brazil’s LGPD, or GDPR for regional companies with European presence) can benefit from TDX’s cryptographic guarantees.

Confidential multi-tenant computing: In architectures where multiple organizations share infrastructure and don’t trust each other, TDX provides verifiable workload isolation.

Requirements for using TDX:

  • 4th-generation Intel Xeon Scalable (Sapphire Rapids) processor or later with TDX support enabled in BIOS
  • Ubuntu 26.04 as both host and guest (both must support TDX)
  • Proxmox VE 9.x or later with TDX support enabled in QEMU
  • BIOS with TDX enabled (generally requires explicit configuration on Dell, HPE, Supermicro systems)

If your current hardware is an older generation (Ice Lake, Cascade Lake), TDX does not apply. For most ISP infrastructures in LATAM, TDX is a technology to keep on the radar but not an immediate requirement. That changes if you’re in the IaaS business or if you have clients with strict compliance requirements.
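A quick way to check whether a host even advertises TDX, before touching BIOS or Proxmox settings. This is a sketch: the exact CPU flag name depends on the kernel build, and "tdx" is assumed here:

```shell
#!/usr/bin/env bash
# Sketch: check whether the CPU advertises TDX (flag name "tdx" assumed).
# Accepts an alternate cpuinfo file for testing; defaults to the live one.
has_tdx_flag() {
  grep -qw tdx "${1:-/proc/cpuinfo}"
}

if has_tdx_flag; then
  echo "CPU advertises TDX; check kvm_intel and BIOS settings next"
else
  echo "No tdx flag found: older CPU generation or TDX disabled in BIOS"
fi
```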


Proxmox VE 9.x: the platform

Proxmox VE is the open-source hypervisor based on KVM/QEMU with LXC management and a full web interface. Version 9.x, built on Debian 13 (Trixie), is the current stable version and the recommended one for new installations.

Compatibility with Ubuntu 26.04 as a guest: Proxmox VE 9.x includes full support for Ubuntu 26.04 as a VM operating system. Ubuntu 26.04’s kernel 7.0 works with the virtio backend, the virtio-net network driver, and the virtio-scsi disk controller that Proxmox uses by default (e1000e remains available as a fallback NIC model).

VM templates: Proxmox has a template system that lets you clone preconfigured VMs. Ubuntu 26.04 can be configured as a base template to provision new instances in seconds.


Preparing Ubuntu 26.04 as a template in Proxmox

The recommended workflow for having an Ubuntu 26.04 template ready to clone:

Step 1: Download the official cloud image

Canonical publishes cloud images of Ubuntu 26.04 in qcow2 format, optimized for use with KVM/QEMU. These images include cloud-init preinstalled:

# On the Proxmox host, download the official cloud image
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img \
  -O /var/lib/vz/template/ubuntu-26.04-cloud.img

(Replace “noble” with the actual codename for Ubuntu 26.04 when available. Ubuntu cloud images are always at cloud-images.ubuntu.com.)
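It is worth verifying the download against Canonical’s published SHA256SUMS file, which sits in the same directory as the image. The helper below is a generic sketch:

```shell
# Verify a downloaded image against a known SHA-256 checksum.
# Usage: verify_image <file> <expected_sha256>
verify_image() {
  echo "$2  $1" | sha256sum -c - >/dev/null
}

# Typical flow (the SHA256SUMS file lives next to the image):
#   wget https://cloud-images.ubuntu.com/<codename>/current/SHA256SUMS
#   grep cloudimg-amd64.img SHA256SUMS   # copy the hash, then:
#   verify_image /var/lib/vz/template/ubuntu-26.04-cloud.img <hash>
```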

Step 2: Create the base VM in Proxmox

# Create the VM with ID 9001 (use an ID outside the production VM range)
qm create 9001 \
  --name ubuntu-26.04-template \
  --memory 2048 \
  --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --ostype l26 \
  --agent enabled=1 \
  --serial0 socket \
  --vga serial0

# Import the cloud image disk to local-lvm storage
qm importdisk 9001 /var/lib/vz/template/ubuntu-26.04-cloud.img local-lvm

# Configure the imported disk as the boot disk
qm set 9001 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:vm-9001-disk-0,cache=writeback,discard=on

# Configure boot and cloud-init
qm set 9001 \
  --boot c \
  --bootdisk scsi0 \
  --ide2 local-lvm:cloudinit

# Configure cloud-init options
qm set 9001 \
  --ciuser ubuntu \
  --cipassword "" \
  --sshkeys /root/.ssh/authorized_keys \
  --ipconfig0 ip=dhcp

# Enable QEMU guest agent
qm set 9001 --agent enabled=1

# Convert to template
qm template 9001

Step 3: Clone the template for new instances

# Clone the template with a new ID
qm clone 9001 101 --name new-vm --full

# Adjust clone resources if needed
qm set 101 --memory 4096 --cores 4

# Resize the disk
qm resize 101 scsi0 +20G

# Start the VM
qm start 101

Automation with NoCloud templates

cloud-init is the de facto standard for initial VM configuration in cloud and on-premise environments. Ubuntu 26.04 includes cloud-init preinstalled in its cloud images. The NoCloud datasource is what Proxmox uses when configuring cloud-init options through the web interface or the API.

NoCloud works by presenting the VM with an ISO image (a virtual disk) containing two files:

  • user-data: user configuration, packages, commands to run
  • meta-data: instance information (hostname, ID)
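Outside Proxmox, the same pair of files can be packed into a NoCloud seed ISO by hand, for example with cloud-localds from the cloud-image-utils package. The hostname and instance ID below are illustrative:

```shell
# Build a NoCloud seed ISO by hand (values are illustrative)
cat > meta-data <<'EOF'
instance-id: iid-noc-server-01
local-hostname: noc-server-01
EOF

cat > user-data <<'EOF'
#cloud-config
hostname: noc-server-01
EOF

# cloud-localds wraps genisoimage with the volume label cloud-init
# expects ("cidata")
if command -v cloud-localds >/dev/null; then
  cloud-localds seed.iso user-data meta-data
else
  echo "install cloud-image-utils to build seed.iso"
fi
```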

Proxmox automatically generates this ISO when you configure the cloud-init options on the VM. But for more advanced automation, you can create your own user-data:

Example user-data for an ISP networking server

#cloud-config
hostname: noc-server-01
fqdn: noc-server-01.ejemplo.isp

users:
  - name: admin
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... [email protected]
    sudo: ALL=(ALL) NOPASSWD:ALL

package_update: true
package_upgrade: true
packages:
  - qemu-guest-agent
  - git
  - vim
  - htop
  - net-tools
  - tcpdump
  - mtr-tiny
  - iperf3
  - prometheus-node-exporter
  - zabbix-agent2

runcmd:
  - systemctl enable qemu-guest-agent
  - systemctl start qemu-guest-agent
  - systemctl enable prometheus-node-exporter
  - systemctl start prometheus-node-exporter
  - timedatectl set-timezone America/Argentina/Buenos_Aires

write_files:
  - path: /etc/zabbix/zabbix_agent2.conf.d/isp-noc.conf
    content: |
      Server=10.0.0.100
      ServerActive=10.0.0.100
      Hostname=noc-server-01

final_message: "VM ready: $UPTIME seconds since boot"
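Before attaching a user-data file like the one above, it can be linted with cloud-init’s schema validator (the schema subcommand in recent releases; older versions use cloud-init devel schema):

```shell
# Validate user-data syntax and schema before first boot
cloud-init schema --config-file base-noc-server.yaml
```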

cloud-init snippets in Proxmox

Proxmox allows you to define cloud-init “snippets” that can be reused. To use them:

# On the Proxmox host, save the user-data to the snippets directory
cat > /var/lib/vz/snippets/base-noc-server.yaml << 'EOF'
#cloud-config
# ... user-data content ...
EOF

# When cloning the VM, specify the snippet
qm set 101 --cicustom "user=local:snippets/base-noc-server.yaml"

With this workflow, provisioning a new server from template to having the monitoring agent and all networking tools installed takes less than five minutes and requires no manual intervention.


Integration with Ansible for management at scale

For environments with multiple Proxmox hosts (clusters) and dozens or hundreds of VMs, manual template and clone management doesn’t scale. The natural combination is Proxmox + cloud-init + Ansible:

# playbook: provision_vm.yml
# VM parameters (vm_name, vm_id, vm_memory, vm_cores, vm_ip, vm_gw,
# admin_ssh_key, proxmox_api_token) are supplied as variables,
# e.g. via --extra-vars or group_vars.
- name: Clone and configure a new VM
  hosts: proxmox_host
  tasks:
    - name: Clone template
      community.general.proxmox_kvm:
        api_host: proxmox.ejemplo.isp
        api_user: root@pam
        api_token_id: ansible
        api_token_secret: "{{ proxmox_api_token }}"
        clone: ubuntu-26.04-template
        name: "{{ vm_name }}"
        newid: "{{ vm_id }}"
        full: true
        node: pve01
        storage: local-lvm

    - name: Configure resources
      community.general.proxmox_kvm:
        api_host: proxmox.ejemplo.isp
        api_user: root@pam
        api_token_id: ansible
        api_token_secret: "{{ proxmox_api_token }}"
        vmid: "{{ vm_id }}"
        memory: "{{ vm_memory }}"
        cores: "{{ vm_cores }}"
        ipconfig:
          ipconfig0: "ip={{ vm_ip }}/24,gw={{ vm_gw }}"
        ciuser: admin
        sshkeys: "{{ admin_ssh_key }}"
        update: true

    - name: Start VM
      community.general.proxmox_kvm:
        api_host: proxmox.ejemplo.isp
        api_user: root@pam
        api_token_id: ansible
        api_token_secret: "{{ proxmox_api_token }}"
        vmid: "{{ vm_id }}"
        state: started

The community.general.proxmox_kvm Ansible module lets you manage the full VM lifecycle: creation, configuration, start, stop, and deletion.
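Assuming the playbook parameterizes the VM name, ID, and resources through variables (the variable names here are illustrative and must match whatever the playbook actually defines), provisioning a new VM becomes a one-liner:

```shell
# Provision a VM from the CLI; variable names are illustrative
ansible-playbook provision_vm.yml \
  -e vm_name=noc-server-02 \
  -e vm_id=102 \
  -e vm_memory=4096 \
  -e vm_cores=4
```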


Hardening Ubuntu 26.04 for production environments

Ubuntu 26.04 includes several security improvements by default, but for production it’s worth applying additional configuration:

AppArmor active by default

Ubuntu keeps AppArmor (its default Linux Security Module) active with profiles for the most common services. Verify that the relevant profiles are in enforce mode:

# Check profile status
apparmor_status

# Check profiles in complain mode (restrictions not enforced)
apparmor_status --complaining

For critical services without an existing AppArmor profile, you can generate one with aa-genprof.
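The typical profile lifecycle, sketched (the service binary path is illustrative; the tools come from the apparmor-utils package):

```shell
# Generate a profile interactively while exercising the service
aa-genprof /usr/local/bin/myservice

# Iterate in complain mode (violations are logged, not blocked),
# then switch to enforce once the log is clean
aa-complain /etc/apparmor.d/usr.local.bin.myservice
aa-enforce /etc/apparmor.d/usr.local.bin.myservice
```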

SSH hardening

# /etc/ssh/sshd_config.d/hardening.conf
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
MaxAuthTries 3
LoginGraceTime 30
ClientAliveInterval 300
ClientAliveCountMax 2
AllowUsers admin
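After dropping the file in place, validate before reloading; a typo in sshd_config can lock you out of a remote host:

```shell
# -t parses the effective configuration and exits non-zero on errors;
# on Ubuntu the service unit is named "ssh"
sshd -t && systemctl reload ssh
```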

Firewall with nftables

Ubuntu 26.04 uses nftables as the default backend for ufw. For infrastructure servers, configure minimal rules:

# Enable ufw
ufw default deny incoming
ufw default allow outgoing

# Allow SSH only from the management network
ufw allow from 10.0.0.0/24 to any port 22 proto tcp

# Allow monitoring (Zabbix Agent, Prometheus)
ufw allow from 10.0.0.100 to any port 10050 proto tcp
ufw allow from 10.0.0.101 to any port 9100 proto tcp

ufw enable

Automatic security updates

# Install unattended-upgrades
apt install unattended-upgrades

# Configure to apply security updates automatically
dpkg-reconfigure -pmedium unattended-upgrades

For critical production servers, automatic updates can be risky without prior testing. The alternative is to configure unattended-upgrades only for security patches (not version upgrades) and maintain a monthly manual review process.
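A minimal sketch of that security-only restriction (the drop-in filename is arbitrary; the Allowed-Origins syntax is unattended-upgrades’ own):

```shell
# /etc/apt/apt.conf.d/52unattended-local: security pocket only, no
# automatic reboots (overrides the 50unattended-upgrades defaults)
cat > /etc/apt/apt.conf.d/52unattended-local <<'EOF'
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::Automatic-Reboot "false";
EOF
```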


Migrating from Proxmox on Debian Bookworm

If you have an existing Proxmox cluster running on Debian Bookworm (the base for Proxmox VE 8.x), migrating VMs to the new stack involves two separate dimensions:

Proxmox host migration (not immediately required): The Proxmox host can stay on Debian Bookworm without needing to migrate to Ubuntu 26.04 as the hypervisor OS. Proxmox as a product runs on Debian, not Ubuntu. Ubuntu 26.04 applies as the guest VM operating system, not the host.

VM template upgrade: If you have Ubuntu 22.04 LTS or 24.04 LTS templates in production, the recommended strategy is:

  1. Create a clean new Ubuntu 26.04 template in parallel.
  2. For new VMs, use the 26.04 template.
  3. For existing critical VMs, evaluate in-place upgrade vs. reprovisioning on a case-by-case basis.

The in-place upgrade from Ubuntu 24.04 to 26.04 is possible with do-release-upgrade, but for production VMs with complex configurations, reprovisioning with the new template and restoring data/configuration tends to be more predictable.


What we recommend in practice

At Ayuda.LA we work with infrastructure teams managing Proxmox clusters with anywhere from 20 to 200 VMs. What has the greatest impact on operational efficiency isn’t the specific kernel or guest OS version: it’s the consistency of the provisioning process.

A well-built template with cloud-init, with the Zabbix agent installed and configured from first boot, with SSH keys properly distributed, and with basic firewall policies in place, is worth more than a version upgrade without a documented process.

Ubuntu 26.04 LTS is a good excuse to review and formalize that process if it hasn’t been done yet. LTS releases have five years of support (and ten with Extended Security Maintenance), making them the right foundation for production infrastructure: you won’t be upgrading the OS on all your VMs every six months.

Learn more about our virtualization and infrastructure automation services.


Want to modernize your virtualization stack?

We can evaluate your current Proxmox setup, design a provisioning process with cloud-init and Ansible, and guide you through the migration to Ubuntu 26.04 LTS as the base for your production VMs.

Let’s talk about your infrastructure →


Have questions about Proxmox, Ubuntu, or VM automation? Write to us at [email protected] — we reply to every message.