Diskussion:PVE

From C3D2

Attaching CephFS

ceph-create-keys

Proxmox VE 9 + LMDE 7 (Cinnamon) on ZFS RAID1

Overview

This guide documents installing the LMDE 7 "Gigi" Cinnamon desktop environment on top of Proxmox VE 9, with the root filesystem on ZFS RAID1. It extends the Proxmox forum tutorial (see References) with all the ZFS-specific fixes required to make it work.

  • Base: Proxmox VE 9 installed via ISO on ZFS RAID1 (two NVMe drives)
  • Desktop: LMDE 7 "Gigi" (Debian 13 Trixie-based) with Cinnamon
  • Bootloader: proxmox-boot-tool (EFI, one ESP per drive)


Encryption: ZFS Native Encryption of rpool

IMPORTANT: The Proxmox VE installer does NOT support ZFS encryption. There is no encryption option in the installer GUI. Encryption must be applied post-install by migrating datasets.

ZFS native encryption is marked EXPERIMENTAL in Proxmox with known issues:

  • Replication with encrypted datasets may fail
  • Checksum errors with some snapshot/ZVOL operations

The procedure below encrypts rpool/ROOT (the OS) with a passphrase entered at boot, and rpool/data + rpool/var-lib-vz with a random keyfile stored on the encrypted root (auto-unlocked).

The EFI partitions remain unencrypted — GRUB reads kernels from the ESP before the passphrase is entered. Only the ZFS data partitions are encrypted.

Prerequisites

  • Proxmox VE 9 ISO on USB (for the debug shell)
  • zfs-initramfs must be installed on the system (verify: dpkg -l zfs-initramfs)
  • Know your dataset layout: zfs list -o name,encryption,mountpoint

Phase 1: Encrypt rpool/ROOT (from Proxmox ISO debug mode)

Boot the Proxmox ISO → Advanced Options → Graphical, debug mode → press Ctrl+D until a bash shell appears.

 # Import the pool
 zpool import -f rpool
 # Snapshot ROOT and all children
 zfs snapshot -r rpool/ROOT@copy
 # Copy ROOT to a temporary location
 zfs send -R rpool/ROOT@copy | zfs receive rpool/copyroot
 # Destroy the original unencrypted ROOT
 zfs destroy -r rpool/ROOT
 # Recreate ROOT as an encrypted dataset — you will set the passphrase here
 zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/ROOT
 # Restore pve-1 into the new encrypted ROOT
 zfs send -R rpool/copyroot/pve-1@copy | zfs receive -o encryption=on rpool/ROOT/pve-1
 # Clean up the temporary copy
 zfs destroy -r rpool/copyroot
 # Export the pool
 zpool export rpool

Phase 2: Chroot and configure initramfs

 # Re-import with altroot, load key (rpool/ROOT is the encryption root, not pve-1)
 zpool import -f -R /mnt rpool
 zfs load-key rpool/ROOT
 zfs mount rpool/ROOT/pve-1
 # Create mountpoints and chroot
 mkdir -p /mnt/proc /mnt/sys /mnt/dev
 mount -o rbind /proc /mnt/proc
 mount -o rbind /sys /mnt/sys
 mount -o rbind /dev /mnt/dev
 chroot /mnt /bin/bash

Inside the chroot:

 # Generate 256-bit random keyfile for data datasets
 dd if=/dev/urandom bs=32 count=1 of=/.data.key
 chmod 400 /.data.key
 chattr +i /.data.key
 # Systemd service to auto-load all ZFS keys at boot
 cat > /etc/systemd/system/zfs-load-key.service << 'EOF'
 [Unit]
 Description=Load ZFS encryption keys
 DefaultDependencies=no
 After=zfs-import.target
 Before=zfs-mount.service
 [Service]
 Type=oneshot
 RemainAfterExit=yes
 ExecStart=/usr/sbin/zfs load-key -a
 [Install]
 WantedBy=zfs-mount.service
 EOF
 systemctl enable zfs-load-key
 # Fix for boot screen getting stuck at encryption passphrase prompt
 echo "simplefb" >> /etc/initramfs-tools/modules
 # Configure initramfs to force-import pool (handles unclean shutdown state)
 echo 'ZFS_INITRD_ADDITIONAL_ARGS="-f"' >> /etc/default/zfs
 # Rebuild initramfs and sync to both ESPs
 update-initramfs -k all -u
 proxmox-boot-tool refresh
 exit
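
The dd invocation above produces a 32-byte (256-bit) raw key, which is what keyformat=raw with aes-256-gcm expects. A quick sanity check of the generated key size, using a throwaway path instead of the real /.data.key, looks like:

```shell
# Demonstrate the key generation from the chroot step, written to a
# throwaway path (/tmp/demo-data.key) instead of the real /.data.key
dd if=/dev/urandom bs=32 count=1 of=/tmp/demo-data.key 2>/dev/null
chmod 400 /tmp/demo-data.key
# keyformat=raw with aes-256-gcm expects exactly 32 bytes
stat -c %s /tmp/demo-data.key
# prints 32
```

If the file is not exactly 32 bytes, zfs create with keyformat=raw will refuse the key.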

Phase 3: Clean export and reboot

If the pool cannot be exported cleanly due to /mnt/var/lib/vz being stuck mounted, use the -N (no mount) trick to clear the "in use" flag without mounting any datasets:

 # Lazy unmount bind mounts
 umount -l /mnt/proc /mnt/sys /mnt/dev
 # Try normal export first
 zpool export -f rpool
 # If export fails due to stuck mounts, use this instead:
 zpool import -f -N rpool
 zpool export rpool
 reboot

First boot

The initramfs will prompt for the passphrase to unlock rpool/ROOT. After entering it, the zfs-load-key systemd service auto-unlocks rpool/data and rpool/var-lib-vz using /.data.key on the decrypted root.

Verify encryption after login:

 zfs get encryption rpool/ROOT/pve-1 rpool/data rpool/var-lib-vz

Phase 4: Migrate data datasets to encrypted (from running system)

rpool/data and rpool/var-lib-vz cannot be destroyed from the ISO session due to mount conflicts with the altroot. Do this from the running system after the first successful boot (these datasets are empty at this stage — no VMs or containers created yet).

 # rpool/data
 zfs unmount rpool/data
 zfs destroy rpool/data
 zfs create -o encryption=aes-256-gcm \
   -o keyformat=raw -o keylocation=file:///.data.key \
   rpool/data
 # rpool/var-lib-vz
 systemctl stop pve-guests
 zfs unmount rpool/var-lib-vz
 zfs destroy rpool/var-lib-vz
 zfs create -o encryption=aes-256-gcm \
   -o keyformat=raw -o keylocation=file:///.data.key \
   rpool/var-lib-vz
 systemctl start pve-guests
 # Verify all datasets encrypted
 zfs get encryption rpool/ROOT/pve-1 rpool/data rpool/var-lib-vz

Known Issues and Fixes

--- "keys must be loaded for encryption root of rpool/ROOT/pve-1 (rpool/ROOT)" --- rpool/ROOT is the encryption root, not pve-1. Load the key on the parent:

 zfs load-key rpool/ROOT

--- Boot drops to a BusyBox ash initramfs with no working pool import --- The pool carries an "in use" flag from an unclean previous session. Boot the ISO, force-import without mounting, export cleanly, then reboot:

 zpool import -f -N rpool
 zpool export rpool
 reboot

--- /mnt/var/lib/vz cannot be unmounted during ISO session --- Do not attempt to destroy rpool/var-lib-vz from the ISO session. Skip it and migrate from the running system after first boot (Phase 4 above).

Option B: LUKS beneath ZFS (alternative, not used here)

Encrypt block devices with LUKS first, ZFS on top. Well-proven, works with GRUB. Requires external drive for backup and full pool destroy/recreate. Not documented in detail here as ZFS native encryption was chosen instead.
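
For orientation only, the layering would look roughly like this. This is a sketch, not a tested procedure; the partition names are placeholders, and luksFormat destroys all data on them:

 cryptsetup luksFormat /dev/<disk1-part3>
 cryptsetup luksFormat /dev/<disk2-part3>
 cryptsetup open /dev/<disk1-part3> crypt1
 cryptsetup open /dev/<disk2-part3> crypt2
 zpool create rpool mirror /dev/mapper/crypt1 /dev/mapper/crypt2

ZFS then sees only the decrypted mapper devices; GRUB support and key handling are provided by LUKS/cryptsetup rather than ZFS.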

Create an encrypted dataset for VM/container storage (ZFS native syntax, also referenced by Option C below):

 zfs create \
   -o encryption=aes-256-gcm \
   -o keyformat=passphrase \
   -o keylocation=prompt \
   rpool/encrypted
 # Load the key on each boot via a systemd service:
 cat > /etc/systemd/system/zfs-load-key.service << 'EOF'
 [Unit]
 Description=Load ZFS encryption keys
 DefaultDependencies=no
 After=zfs-import.target
 Before=zfs-mount.service
 [Service]
 Type=oneshot
 RemainAfterExit=yes
 ExecStart=/usr/sbin/zfs load-key rpool/encrypted
 [Install]
 WantedBy=zfs-mount.service
 EOF
 systemctl enable zfs-load-key.service

Option C: Accept current state, encrypt only new data pools

If adding a second pool later (e.g. for VM storage on additional drives), create it encrypted from the start using the zfs create encryption syntax shown under Option B. The existing rpool stays unencrypted.


Prerequisites

  • Proxmox VE 9 already installed on ZFS RAID1 via ISO
  • SSH access as root
  • No subscription key required (uses pve-no-subscription repo)

Step 1: Repository Configuration

Remove the stock Proxmox and Debian source files, then replace them with the LMDE 7 and pve-no-subscription repositories.

 rm /etc/apt/sources.list.d/ceph.sources \
    /etc/apt/sources.list.d/debian.sources \
    /etc/apt/sources.list.d/pve-enterprise.sources

Create the main repo file:

 cat > /etc/apt/sources.list.d/official-package-repositories.list << 'EOF'
 ##Source LMDE7##
 deb http://packages.linuxmint.com gigi main upstream import backport
 deb https://deb.debian.org/debian trixie main contrib non-free non-free-firmware
 deb https://deb.debian.org/debian trixie-updates main contrib non-free non-free-firmware
 deb http://security.debian.org trixie-security main contrib non-free non-free-firmware
 deb https://deb.debian.org/debian trixie-backports main contrib non-free non-free-firmware
 EOF

Create the PVE no-subscription repo file:

 cat > /etc/apt/sources.list.d/additional-repositories.list << 'EOF'
 # PVE pve-no-subscription repository
 deb http://download.proxmox.com/debian/pve trixie pve-no-subscription
 EOF
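
The repo files above are written with a quoted heredoc delimiter ('EOF'), which keeps the content literal; an unquoted delimiter would let the shell expand $-variables into the file. A minimal illustration (using throwaway /tmp paths):

```shell
# Quoted delimiter: $HOME stays literal in the written file
cat > /tmp/demo-quoted.list << 'EOF'
path=$HOME
EOF
# Unquoted delimiter: $HOME is expanded by the shell before writing
cat > /tmp/demo-unquoted.list << EOF
path=$HOME
EOF
cat /tmp/demo-quoted.list
# prints: path=$HOME
```

The same applies to the systemd-unit heredocs earlier in this guide.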

Step 2: APT Preferences

Pin Linux Mint packages to avoid conflicts with Debian packages:

 cat > /etc/apt/preferences.d/official-package-repositories.pref << 'EOF'
 Package: *
 Pin: release o=linuxmint,c=upstream
 Pin-Priority: 700
 EOF
 cat > /etc/apt/preferences.d/official-extra-repositories.pref << 'EOF'
 Package: *
 Pin: origin "build.linuxmint.com"
 Pin-Priority: 700
 EOF
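
Priorities above the apt default of 500 make the pinned Mint origin win even when Debian carries a higher package version. Conversely, a negative priority blocks a source for a package entirely. A hypothetical fragment (firefox chosen only as an illustration, not part of this setup) that would force the Debian build:

 Package: firefox
 Pin: release o=linuxmint,c=upstream
 Pin-Priority: -1

apt-cache policy <package> shows which pin is in effect for a given package.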

Step 3: Install Keys and Update

 apt update -oAcquire::AllowInsecureRepositories=true
 apt install --reinstall -y --allow-unauthenticated linuxmint-keyring
 apt update

Step 4: ZFS Fix — GRUB Font

The Mint theme references a GRUB font that must exist on both EFI System Partitions (ESPs). On ZFS systems, proxmox-boot-sync runs in a private mount namespace where each ESP is mounted as /boot. The font must therefore be on the ESPs, not just on ZFS.

Install the Ubuntu font package and generate the PF2 grub font:

 apt install -y fonts-ubuntu
 mkdir -p /boot/grub/fonts
 grub-mkfont -s 16 \
   -o /boot/grub/fonts/UbuntuMono16.pf2 \
   /usr/share/fonts/truetype/ubuntu/UbuntuMono-R.ttf

Find the UUIDs of both ESPs and copy fonts + theme to each:

 ls /dev/disk/by-uuid/   # identify the two ESP UUIDs (short UUIDs like 5109-XXXX)
 for UUID in <ESP-UUID-1> <ESP-UUID-2>; do
   mount /dev/disk/by-uuid/$UUID /mnt
   mkdir -p /mnt/grub/fonts /mnt/grub/themes
   cp /boot/grub/fonts/UbuntuMono16.pf2 /mnt/grub/fonts/
   cp -r /boot/grub/themes/linuxmint /mnt/grub/themes/
   umount /mnt
 done

Step 5: ZFS Fix — Remove Stale grub.cfg from ZFS Root

A second kernel hook (zz-update-grub from grub2-common) runs update-grub if /boot/grub/grub.cfg exists. On ZFS, this fails with "Unknown filesystem". On a proxmox-boot-tool system the real grub.cfg lives on the ESPs, so the one on ZFS is redundant and can be removed:

 mv /boot/grub/grub.cfg /boot/grub/grub.cfg.bak
 mv /boot/grub/grub.cfg.new /boot/grub/grub.cfg.new.bak

NOTE: Do not restore these files. proxmox-boot-tool manages grub.cfg on the ESPs. The files on ZFS are never read by the bootloader.


Step 6: Protect the PVE Kernel

Prevent apt from removing the Proxmox kernel meta-package when resolving Mint/Debian package dependencies:

 apt-mark hold proxmox-default-kernel

NOTE: Hold the meta-package only, not versioned kernel packages. New PVE kernel versions install as separate packages (e.g. proxmox-kernel-6.18.x-y-pve) and are unaffected by this hold. Kernel updates still flow automatically.

After each kernel update, clean up old kernels with:

 proxmox-boot-tool clean

Step 7: ZFS Checkpoint (Safety Net)

Before large apt operations, create a ZFS checkpoint as a recovery point:

 zpool checkpoint rpool

If something goes wrong, boot from a live USB and roll back:

 zpool checkpoint --rewind rpool

After confirming the system is healthy, discard the checkpoint:

 zpool checkpoint --discard rpool

Step 8: Install Cinnamon Desktop

 apt install --install-recommends -y debian-system-adjustments && apt update && \
 apt install --install-recommends -y mint-meta-cinnamon cinnamon cinnamon-l10n \
   cinnamon-desktop-data mint-common lightdm-settings mintlocale mintmenu \
   mintsystem mintupdate mintwelcome mint-mirrors mint-translations mint-themes \
   mintinstall timeshift xreader boot-repair synaptic numlockx \
   software-properties-common software-properties-gtk slick-greeter \
   gnome-terminal gnome-font-viewer gnome-calendar gnome-calculator \
   gnome-menus gnome-power-manager firefox build-essential fuse3 cups \
   acpi-support acpid alsa-utils && \
 apt install -y cinnamon-desktop-environment mintdesktop

Step 9: Dist-Upgrade

 apt dist-upgrade -y

If kernel package configuration fails during this step, it is likely the grub font or grub.cfg issue from Steps 4-5. Resolve those first, then run: dpkg --configure -a


Step 10: Disable Unnecessary PVE Services

The following services are only relevant for multi-node clusters or remote SPICE clients and can be safely disabled on a single-node workstation:

 systemctl disable --now pve-ha-crm pve-ha-lrm spiceproxy
 pve-ha-crm   - HA cluster resource manager — requires multiple nodes
 pve-ha-lrm   - HA local resource manager — requires multiple nodes
 spiceproxy   - SPICE client proxy — web UI uses noVNC instead

Services kept active (needed for local VM/container use):

 pvedaemon    - PVE API and web UI (port 8006)
 pveproxy     - PVE web UI proxy
 pvestatd     - VM/CT status monitoring
 pvescheduler - backup scheduling
 pve-cluster  - config storage (required even on single node)
 pve-guests   - autostart VMs/containers on boot
 lxc*/lxcfs   - LXC container support

Step 11: Reboot

 reboot

After reboot, verify:

 uname -r                  # should show pve kernel
 zpool status rpool        # should show ONLINE, no errors
 dpkg --audit              # should return nothing
 proxmox-boot-tool status  # both ESPs should show current kernel

Step 12: Post-Reboot Cleanup

Upgrade ZFS pool features (one-way — older ZFS versions will not be able to read the pool afterward, which is fine on PVE9):

 zpool upgrade rpool

Optional: Firmware

AMD GPU

 wget http://ftp.debian.org/debian/pool/non-free-firmware/f/firmware-nonfree/firmware-amd-graphics_20250808-1_all.deb
 dpkg -x firmware-amd-graphics_20250808-1_all.deb firmware-amd-graphics
 cp -r firmware-amd-graphics/lib/firmware/* /lib/firmware/
 update-initramfs -k all -u

Intel WiFi/Bluetooth

 wget http://ftp.debian.org/debian/pool/non-free-firmware/f/firmware-nonfree/firmware-iwlwifi_20250808-1~bpo13+1_all.deb
 dpkg -x firmware-iwlwifi_20250808-1~bpo13+1_all.deb firmware-iwlwifi
 cp -r firmware-iwlwifi/lib/firmware/* /lib/firmware/
 update-initramfs -k all -u

Realtek WiFi/Bluetooth

 wget http://ftp.debian.org/debian/pool/non-free-firmware/f/firmware-nonfree/firmware-realtek_20250808-1~bpo13+1_all.deb
 dpkg -x firmware-realtek_20250808-1~bpo13+1_all.deb firmware-realtek
 cp -r firmware-realtek/lib/firmware/* /lib/firmware/
 update-initramfs -k all -u

Ongoing Maintenance

Kernel Updates

Kernel updates are fully automatic. proxmox-boot-tool syncs new kernels to both ESPs on install. After each update verify with:

 proxmox-boot-tool status

Clean up old kernels:

 proxmox-boot-tool clean

Before Major apt Operations

Always create a ZFS checkpoint before large upgrades:

 zpool checkpoint rpool
 # run upgrade
 zpool checkpoint --discard rpool   # after confirming success

ZFS Pool Health

 zpool status -v rpool
 zpool list rpool

Troubleshooting

grub-probe: failed to get canonical path of /boot/grub/fonts/UbuntuMono16.pf2

The GRUB font is missing from one or both ESPs. Re-copy to each ESP:

 mount /dev/disk/by-uuid/<ESP-UUID> /mnt
 mkdir -p /mnt/grub/fonts /mnt/grub/themes
 cp /boot/grub/fonts/UbuntuMono16.pf2 /mnt/grub/fonts/
 cp -r /boot/grub/themes/linuxmint /mnt/grub/themes/
 umount /mnt
 dpkg --configure -a

grub-probe: Unknown filesystem

/boot/grub/grub.cfg exists on ZFS, causing zz-update-grub to run update-grub which fails on ZFS. Remove it:

 mv /boot/grub/grub.cfg /boot/grub/grub.cfg.bak
 dpkg --configure -a

dpkg broken packages (plymouth, plymouth-themes, debian-system-adjustments)

These always cascade from one of the two grub errors above. Fix the grub issue first, then run: dpkg --configure -a


References

  • Proxmox Forum: PVE9 + Cinnamon LMDE7 tutorial
 https://forum.proxmox.com/threads/developer-workstation-proxmox-ve-9-with-cinnamon-lmde7.174154/
  • Proxmox Wiki: Host Bootloader / proxmox-boot-tool
 https://pve.proxmox.com/wiki/Host_Bootloader