
Using Pritunl Boot with Vultr Bare Metal

It is recommended to first create a VPC Network before creating the bare metal server. This will allow configuring the VPC interface during the install.

Vultr Bare Metal uses iPXE chain URLs with DHCP support. Currently Vultr bare metal servers do not have iPXE HTTPS support. To begin, select a Linux Distribution, then set Configuration Mode to Interactive. Set Bare Metal Provider to Vultr and paste SSH keys into the prompt.

Once the configuration is complete, click Generate iPXE Install. This will display a page with an iPXE Chain URL, which will be used in the next step. The other information on the page is not needed.

From the Vultr management console create a new bare metal server and in the configuration step select iPXE. Copy in the iPXE Chain URL from the previous step. Leave SSH Keys empty and set Disk Configuration to No RAID; these options have no effect on an iPXE install. Enter a server name, then select both Public IPv6 and VPC Network unless these options will not be needed. After VPC Network is enabled, select one of the VPC networks. Then click Deploy.

After about 10-20 minutes the server will load the iPXE script. This will then load the installer for the selected Linux distribution. Once the installer reaches the pre-installation stage the script will send the system hardware information to the Pritunl Boot service. This will then be displayed on the web page. The page automatically refreshes and does not need to be reloaded. Once this occurs open the Vultr server settings page and copy the IPv4, IPv6 and VPC Network information shown below. This information will be needed for the next step.

There is a bug where servers sometimes do not have VPC networking enabled on the first boot. If the VPC Network is not shown on this page, do not attempt to enable it while the installer is running; this will cause the install to fail. It will need to be enabled and configured after the install.

Generally, if the server has /dev/sda and /dev/sdb disks, these must be used as the install drives even if NVMe drives are also present. This is due to UEFI being disabled.

For this configuration a RAID 1 array will be created on the two selected drives with a 50GB root partition. This creates a 100MB EFI partition and a 50GB root partition. Using an EFI partition allows the /boot mount to be included on the root filesystem. Limiting the size of the root partition leaves the remaining space available for creating an encrypted partition.
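As an illustration, the resulting layout would look roughly like the sketch below. The device names assume /dev/sda and /dev/sdb are the install drives; the exact partition numbering may differ.

```
/dev/sda1 + /dev/sdb1   100MB   EFI system partitions
/dev/sda2 + /dev/sdb2   50GB    RAID 1 -> root filesystem (includes /boot)
remaining space                 unallocated, available for an encrypted partition
```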

For the Public Network configuration select the interface that is displayed with the public IP. Then set the Network Configuration to Static and copy the public IP address from the Vultr server settings into the Public IPv4 field. The CIDR will need to be appended to this address in the format <public_ip>/cidr. This should be shown in the network interfaces above; otherwise it will need to be calculated from the netmask shown in the Vultr settings. Copy the gateway shown into Gateway IPv4.
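If only a dotted-decimal netmask is shown, the CIDR prefix length is the number of set bits in the netmask. A minimal shell sketch of that calculation (the 255.255.240.0 value is a hypothetical example; substitute the netmask from the Vultr settings):

```shell
# Convert a dotted-decimal netmask to a CIDR prefix length by counting set bits.
# NETMASK is a hypothetical example value, not one from the Vultr settings.
NETMASK="255.255.240.0"
PREFIX=0
for octet in ${NETMASK//./ }; do
  while [ "$octet" -gt 0 ]; do
    PREFIX=$((PREFIX + (octet & 1)))
    octet=$((octet >> 1))
  done
done
echo "/$PREFIX"   # 255.255.240.0 -> /20
```

The same calculation applies to the VPC netmask in the private network step below.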

If IPv6 was enabled, copy the IPv6 address shown in the Vultr settings into the Public IPv6 field with the CIDR /64 appended, in the format <public_ip6>/64. The other IPv6 fields should be left blank.

Once the public network is configured, switch to the Private Network tab and select the other network interface. Set the Private Network Configuration to Static. Then copy the VPC address shown in the Vultr server settings and append the CIDR in the format <private_ip>/cidr; the CIDR will need to be calculated from the netmask shown. Set the Network MTU to 8850. Once done click Start Install.

The installer will then go through several stages, which will be shown in the Pritunl Boot web app. Once the installation has completed, the message below will be shown with the ssh command to connect to the server. The Pritunl Boot installation will always use the username cloud with SSH key authentication and a disabled root account.

If the VPC networking did not complete during the install, enable it in the bare metal server settings. Then run the commands below with the VPC IP address and CIDR. Run ip addr to get the interface name of the second interface, the one that is not assigned a public IP.

IFACE_NAME="eno2np1"  # second interface name from ip addr
VPC_IP="10.100.0.3/16"  # VPC address with CIDR appended
sudo nmcli connection add type ethernet con-name "$IFACE_NAME" ifname "$IFACE_NAME" connection.autoconnect yes
sudo nmcli connection modify "$IFACE_NAME" 802-3-ethernet.mtu 8850
sudo nmcli connection modify "$IFACE_NAME" ipv4.method manual +ipv4.addresses "$VPC_IP"
sudo nmcli connection up "$IFACE_NAME"
