Overview / Motivation
This post pertains to OVH’s new VPS line as of mid-2025, running on Haswell CPUs, with plans named “VPS-n”.
OVH does not offer Proxmox VE as an OS option on these plans, and does not have any officially sanctioned method to use a custom ISO. I find this disappointing and frustrating. Even when a host provides a template for the OS I want to use, I like to provide my own ISO to avoid whatever “customizations” or borderline-spyware the host may have included in their template.
While OVH provides a Debian 12 template and PVE can be installed atop a standard Debian installation, I simply Did Not Want To Do That, and wanted a real install, from scratch, from a PVE ISO.
Installation
⚠️⚠️⚠️ WARNING ⚠️⚠️⚠️
You probably should not do this. In fact, just, please, do not do this. If you need me to explain why you shouldn’t do this, then you really should not do it. The following procedure worked for me, but obviously I cannot provide any guarantees or support of any kind.
With that being said, here’s how to do it. We will use a QEMU VM to run the PVE installer, but install directly to the VPS’s disk.
Boot the VPS into rescue
- Log into the OVH control panel and find your VPS
- In the Boot section, click the […] menu, and select Reboot in rescue mode
- Confirm, and then wait to receive an email with the login credentials
Log in to rescue
SSH to the VPS’s IP using the username and password given in the email. We will need to connect a client later to access the QEMU VM, so forward a port: `ssh -L5959:127.0.0.1:5900 root@<VPS IP>`
You will be at a shell in OVH’s Debian-based rescue system.
Collect system information
Collect a few pieces of system information and save them for later.
- `ip a` - save the IPv4 address on the primary interface (was ens3 for me)
- `ip r` - save the default gateway address
- `grep nameserver /etc/resolv.conf` - save the DNS server address
- `lsblk` - find the device name for the VPS disk (was /dev/sdb for me), and make sure it’s not currently mounted
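The “not currently mounted” check can be scripted. Here is a sketch: `disk_is_mounted` is a hypothetical helper (not something in OVH’s rescue image), shown against a canned mounts table; on the real VPS you would point it at `/proc/mounts` instead.

```shell
# Sketch: check whether a disk (or any of its partitions) appears in a
# mounts table. Prefix matching means /dev/sdb also catches /dev/sdb1 etc.
disk_is_mounted() {
    # $1 = device path, $2 = file in /proc/mounts format
    grep -q "^$1" "$2"
}

# Illustration with a canned mounts table; on the VPS, use /proc/mounts.
cat > /tmp/sample_mounts <<'EOF'
/dev/sda1 / ext4 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
EOF

disk_is_mounted /dev/sdb /tmp/sample_mounts && echo mounted || echo "not mounted"
# -> not mounted
```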
Install QEMU and set up the VM
```shell
apt update
apt install qemu-system-x86 --no-install-recommends
cd /dev/shm
wget https://enterprise.proxmox.com/iso/proxmox-ve_9.0-1.iso
qemu-system-x86_64 -enable-kvm \
    -netdev type=user,id=mynet0 \
    -device virtio-net-pci,netdev=mynet0 \
    -m 4G \
    -drive file=<DISK DEVICE NAME>,format=raw,if=virtio \
    -vga qxl \
    -spice port=5900,addr=127.0.0.1,disable-ticketing=on \
    -daemonize \
    -cdrom <PATH TO ISO> \
    -boot d
```
(Use whatever ISO URL is current at the time you’re doing this.)
The VM is now running in the background.
Connect from your workstation
`remote-viewer spice://127.0.0.1:5959`
On Debian, you can get this tool by installing the virt-viewer package.
Install Proxmox VE
Proceed through the Proxmox VE installation as normal. Ignore the warning about no hardware virtualization.
The installer will pick up an RFC 1918 address via DHCP from QEMU’s user-mode networking, so you should manually set the IP, gateway, and DNS server you collected earlier.
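For reference, with those values set, the installed system typically ends up with an ifupdown configuration along these lines in /etc/network/interfaces (PVE’s installer creates a vmbr0 bridge enslaving the physical NIC; the addresses below are illustrative placeholders - substitute the values you collected and your interface name in place of ens3):

```
auto lo
iface lo inet loopback

iface ens3 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports ens3
        bridge-stp off
        bridge-fd 0
```

Knowing this layout helps if you later need to fix networking from the rescue system. (The DNS server you enter lands in /etc/resolv.conf.)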
When the PVE install is finished, do not reboot when prompted.
Reboot the VPS
- Back on the OVH VPS control panel, in the Boot section, click the […] menu, and select Reboot my VPS
- Now in the Name section, click the […] menu, and select KVM
You should now be watching your VPS boot into Proxmox VE.
Notes
There are a lot of things that can go wrong with this. My primary concern is that OVH might update their wacky boot system and PVE would no longer be bootable. This would probably be fixable by mounting the VM disk from the rescue environment and repairing it, but it is just something to be aware of.
This might also work for other OSes, but I haven’t tested it. You will be limited by the size of the ISO, since we’re storing it in memory temporarily. The available memory is the VPS RAM size minus roughly 2 GB for OVH’s rescue image, which itself runs from RAM - note this figure might change when OVH updates their rescue image. Luckily, these new VPSes have a ton of memory, and everything except the most extravagant/bloated OS ISOs should fit in the remaining memory without a problem.
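That back-of-the-envelope fit check can be expressed as a tiny shell helper. This is a sketch: `fits_in_ram` is hypothetical, and the 2 GiB reservation is the figure observed above, not a guarantee.

```shell
# Sketch: does an ISO of a given size fit in RAM alongside the rescue image?
# Assumes roughly 2 GiB is taken by OVH's rescue system, per the note above.
fits_in_ram() {
    ram_mib=$1; iso_mib=$2; reserved_mib=2048
    [ $((ram_mib - reserved_mib)) -ge "$iso_mib" ]
}

# e.g. an 8 GiB VPS with a ~1.5 GiB PVE ISO:
fits_in_ram 8192 1536 && echo "fits" || echo "too big"
# -> fits
```

One extra wrinkle: on Linux, /dev/shm is a tmpfs that defaults to half of physical RAM, so for a very large ISO you may also need to remount it with a larger size limit.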
This guide was adapted in part from this StackExchange post.