Vultr

Configure Pritunl Cloud on Vultr

This tutorial will create a single-host Pritunl Cloud server on Vultr bare metal with public IPv6 addresses for each instance. For multi-host clusters it is recommended to use a dedicated MongoDB server.

Create Vultr Server

Select Bare Metal Instance and set the Server Type to CentOS 8. Enable RAID 1 and IPv6. Then set an SSH Key and click Deploy Now.

Configure Vultr Server

Connect to the server with SSH using the root user. Then run the commands below to tune the server, disable the local firewall and SELinux, and enable time synchronization with chrony.

sudo tee /etc/sysctl.d/10-dirty.conf << EOF
vm.dirty_ratio = 3
vm.dirty_background_ratio = 2
EOF

sudo tee /etc/sysctl.d/10-swappiness.conf << EOF
vm.swappiness = 10
EOF

sudo tee /etc/security/limits.conf << EOF
* hard nofile 500000
* soft nofile 500000
root hard nofile 500000
root soft nofile 500000
EOF

sudo yum -y update
sudo yum -y remove cockpit-ws

sudo systemctl disable firewalld
sudo systemctl stop firewalld

sudo yum -y install chrony
sudo systemctl start chronyd
sudo systemctl enable chronyd

sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
sudo setenforce 0

sudo sed -i '/^PasswordAuthentication/d' /etc/ssh/sshd_config
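
Optionally the sysctl and SSH configuration changes above can be applied immediately rather than waiting for the reboot later in this tutorial.

sudo sysctl --system
sudo systemctl restart sshd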

Run the commands below to enable automatic updates.

sudo dnf -y install dnf-automatic
sudo sed -i 's/^upgrade_type =.*/upgrade_type = default/g' /etc/dnf/automatic.conf
sudo sed -i 's/^download_updates =.*/download_updates = yes/g' /etc/dnf/automatic.conf
sudo sed -i 's/^apply_updates =.*/apply_updates = yes/g' /etc/dnf/automatic.conf
sudo systemctl enable --now dnf-automatic.timer
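
To confirm the automatic update timer is active, the timer status can be checked with the command below.

sudo systemctl list-timers dnf-automatic.timer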

Install Podman MongoDB (Optional)

This tutorial will run MongoDB in a Podman container with host networking. This will provide isolation and limit the memory usage of the MongoDB server. Running a Podman container without --network host will break the Pritunl Cloud networking. The commands below will install Podman and start a MongoDB service that will run on localhost:27017 with a 1024m memory limit. The MongoDB database data will be stored in the /var/lib/mongo directory.

sudo yum -y install podman

sudo mkdir -p /var/lib/mongo
sudo podman run -d --name mongo --network host --cpus 1 --memory 1024m --volume /var/lib/mongo:/data/db mongo --bind_ip 127.0.0.1

sudo tee /etc/systemd/system/mongo.service << EOF
[Unit]
Description=MongoDB Podman container
Wants=syslog.service

[Service]
Restart=always
ExecStart=/usr/bin/podman start -a mongo
ExecStop=/usr/bin/podman stop -t 10 mongo

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable mongo
sudo systemctl start mongo
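
To verify the MongoDB container is running and listening only on localhost, the commands below can be used (ss is provided by the iproute package on CentOS 8).

sudo podman ps --filter name=mongo
sudo ss -tlnp | grep 27017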

Configure Bridge Interfaces

Pritunl Cloud requires bridge interfaces for instance networking. First open the Vultr server Settings tab and get the IP addresses for the server.

From the Main IP copy the Address to IPADDR, the Netmask to NETMASK and the Gateway to GATEWAY in the configuration below. Run ip addr to get the interface name and replace enp1s0 below if it is different. Ensure the values are entered correctly; if the bridge is misconfigured, access to the server will be lost.

sudo tee /etc/sysconfig/network-scripts/ifcfg-enp1s0 << EOF
TYPE="Ethernet"
BOOTPROTO="none"
NAME="enp1s0"
DEVICE="enp1s0"
ONBOOT="yes"
BRIDGE="pritunlbr0"
EOF
sudo tee /etc/sysconfig/network-scripts/ifcfg-pritunlbr0 << EOF
TYPE="Bridge"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
IPADDR=144.202.97.211
NETMASK=255.255.254.0
GATEWAY=144.202.96.1
DEFROUTE="yes"
NAME="pritunlbr0"
DEVICE="pritunlbr0"
ONBOOT="yes"
EOF

Once done restart the server by running sudo reboot. After restarting ip addr should show the pritunlbr0 bridge interface with the main IP address.
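
After the reboot the commands below can be used to confirm the bridge holds the main IP address and the server still has internet access (8.8.8.8 is only an example external address).

ip addr show pritunlbr0
ping -c 3 8.8.8.8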

Install Pritunl Cloud

Run the commands below to install Pritunl Cloud and the QEMU packages from the Pritunl repositories. The directory /var/lib/pritunl-cloud will be used to store virtual disks; optionally a different partition can be mounted at this directory. If you have disks that will be dedicated to the virtual machines, these should be mounted at the /var/lib/pritunl-cloud directory.

sudo tee /etc/yum.repos.d/pritunl-kvm.repo << EOF
[pritunl-kvm]
name=Pritunl KVM Repository
baseurl=https://repo.pritunl.com/kvm/
gpgcheck=1
enabled=1
EOF

gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 1BB6FBB8D641BD9C6C0398D74D55437EC0508F5F
gpg --armor --export 1BB6FBB8D641BD9C6C0398D74D55437EC0508F5F > key.tmp; sudo rpm --import key.tmp; rm -f key.tmp

sudo yum -y remove qemu-kvm qemu-img qemu-system-x86
sudo yum -y install edk2-ovmf pritunl-qemu-kvm pritunl-qemu-img pritunl-qemu-system-x86

sudo tee /etc/yum.repos.d/pritunl.repo << EOF
[pritunl]
name=Pritunl Repository
baseurl=https://repo.pritunl.com/stable/yum/oraclelinux/8/
gpgcheck=1
enabled=1
EOF

gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 7568D9BB55FF9E5287D586017AE645C0CF8E292A
gpg --armor --export 7568D9BB55FF9E5287D586017AE645C0CF8E292A > key.tmp; sudo rpm --import key.tmp; rm -f key.tmp

sudo yum -y install pritunl-cloud
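
If the pritunl-cloud service is not started automatically by the package, start and enable it with the commands below so the web console is available.

sudo systemctl start pritunl-cloud
sudo systemctl enable pritunl-cloud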

Run the command below to get the default password, then sign in to the Pritunl Cloud web console with the pritunl user and this password.

sudo pritunl-cloud default-password

Configure Pritunl Cloud

In the Users tab select the pritunl user and set a password. Then add the org role to Roles and click Save.

In the Storages tab click New. Set the Name to pritunl-images, the Endpoint to images.pritunl.com and the Bucket to stable. Then click Save. This will add the official Pritunl images store.

In the Organizations tab click New. Name the organization org, add org to Roles and click Save.

In the Datacenters tab click New and name the datacenter us-west-1, then add pritunl-images to Public Storages.

In the Zones tab click New and set the Name to us-west-1a. Set the Network Mode to VXLAN.

In the Blocks tab click New and set the Name to host0. This block will be used for the host internal network, which allows the host to communicate with the instances and provides NAT access to the internet. Set the Network Mode to IPv4 and the Netmask to 255.255.255.0. Then add 10.187.1.0/24 to the IP Addresses and set the Gateway to 10.187.1.1.

In the Vultr management console copy the Network from the IPv6 tab of the server settings.

In the Blocks tab click New and set the Name to host0ip6. Set the Network Mode to IPv6 and use the IPv6 network from above with a /64 CIDR. Set the Gateway to the server IPv6 Address from above.

In the Nodes tab use the Hostname to match the node to the correct IP block and physical server. Set the node Zone to us-west-1a and the Network Mode to Internal Only. Add pritunlbr0 to the Internal Interfaces. Set the Network IPv6 Mode to Static. Add pritunlbr0 with the matching host's IP block, such as host0ip6, to the External IPv6 Block Attachments. Set the Host Network Block to the matching host block, such as host0. Enable Host Network NAT.

In the Firewalls tab click New. Set the Name to instance, set the Organization to org and add instance to the Network Roles.

In the Authorities tab click New. Set the Name to cloud, set the Organization to org and add instance to the Network Roles. Then copy your public SSH key to the SSH Key field. If you are using Pritunl Zero or SSH certificates, set the Type to SSH Certificate and copy the Public Key from the SSH authority in Pritunl Zero to the SSH Certificate field. Then add roles to control access.

In the VPCs tab enter 10.97.0.0/16 in the Network field and click New. Add 10.97.1.0/24 to the Subnets with the name primary. Then set the Name to vpc and click Save.

In the Instances tab click New. Set the Name to test, set the Datacenter to us-west-1, set the Zone to us-west-1a and set the Node. Set the VPC to vpc and the Subnet to primary. Add instance to the Network Roles and set the Image to oraclelinux8_1912.qcow2. If no images are shown, check the Storages configuration and restart the pritunl-cloud service. Click Create twice to accept the Oracle license. Repeat to create instances on each node.

Once the instance has started, SSH into it as the cloud user using the Public IPv6 address and the SSH key configured in the authority. Due to a networking issue you may need to ping the public IPv6 address before using SSH. Verify that VPC traffic works between instances by pinging the Private IPv4 address of each instance.
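
For example, with placeholder addresses (2001:db8::100 and 10.97.1.12 are hypothetical; use the addresses shown in the Instances tab), run the first two commands from your workstation and the last from inside another instance.

ping -6 -c 3 2001:db8::100
ssh cloud@2001:db8::100
ping -c 3 10.97.1.12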