Pods
Pritunl Cloud markdown spec defined resources
Pods are groups of units. Each unit will have a history of changes to the spec and a set of deployments for that unit.
Units
Unit names should be unique within each organization. This avoids conflicts on queries that are done by unit name only. Additionally, resources created by the unit will use the unit name.
Commits
When changes are made to a unit other than the `count` variable a new commit will be made. This commit can then be used for new deployments or migrated to existing deployments. When a deployment is migrated it will only run the `{phase=reload}` blocks right after the migration and the `{phase=reboot}` blocks on the next reboot. It will not run the initial phase blocks.
Deployments
Deployments represent instances deployed with the unit spec for instance units; in the case of image units, a deployment represents an image built from a spec commit or a build in progress. Deployments can be archived and restored by selecting the deployment and using the settings menu. An archived deployment will power off the instance and release resources such as persistent disks. The deployment can later be restored to start the instance again. This is useful for migrating to new spec versions without losing the state of the previous commit. In the deployment settings the tags can be set. Tags allow applying selector tags to image builds that can then be referenced in a spec. Tags on instance deployments currently don't have any use in spec files.
Resource References
All resource references use the format `+/kind/name/key`. Not all resources have key value data and not all contexts support referencing key value data. In the event that multiple resources have the same name the server will attempt to match the correct resource. For example, if multiple VPCs in different datacenters have the same name it will look for the VPC that is in the same datacenter as the instance. In some cases resources are intended to share the same name, such as with persistent disks. In a distributed database where multiple instances are launched, each needing a persistent disk to store database data, multiple disks can be created with the same name. Then in the mounts spec the server will select one of the disks for each deployment until all the available disks are consumed, at which point further deployments will be rejected.
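For example, references of each shape might look like the following (the resource names here are hypothetical):

```
+/vpc/production                  # resource reference without key value data
+/secret/database/data.password   # key value data from a JSON secret
+/unit/load-balancer/private_ips  # key value data from a unit
```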
Phases
The phase of each block determines when that block will run. Below are the three phase types; if no phase type is set it will default to `initial`. The phase is set using a Markdown block attribute. Python and shell are supported for all phases. The blocks will always run in first to last order, including templates where there is an `initial -> reload -> initial` block set. Currently errors will not stop a deployment. By default shell blocks will continue to run on an error unless `set -e` is included at the top of the block. Python blocks will stop on error but subsequent blocks will still run.
```shell {phase=initial}
dnf -y install nginx
```
The initial phase will run when the instance deployment first starts. The initial code blocks will only run once; when an existing deployment is migrated to a new spec, the new initial blocks will not run. If a migration requires running new initial code it should be run manually or worked into a reboot or reload block.
```shell {phase=reboot}
systemctl start nginx
```
The reboot phase will run every time the instance is restarted. This includes restarts initiated by the instance with `sudo reboot` and reboots started from the web console. The Pritunl Cloud agent will pull the current template spec from the IMDS on startup. Even if the template is modified with a migrate, a reboot will always run the correct template.
```python {phase=reload}
import sys
print(sys.version)
```
The reload phase will run any time a change to the related state occurs. This could be a different pod adding deployments that are referenced in the template or state changes on the host. The reload phase should be expected to run at any time. Due to the complexity of a large state the reload phase will often run unnecessarily when there are no changes that impact the template. For this reason CPU intensive tasks should not run unconditionally in a reload phase. Instead the code should first compare the relevant data with the system configuration to detect if the reload has an actual impact on the configuration. One way to do this is to write a new configuration file and compare its hash to the current configuration file to determine if a disruptive action such as a service restart is needed.
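One way to sketch this pattern in a reload block (the config path and rendered contents here are hypothetical):

```python {phase=reload}
import hashlib
import pathlib

def write_if_changed(path, content):
    """Write content only when its SHA-256 differs from the existing file."""
    target = pathlib.Path(path)
    new_hash = hashlib.sha256(content.encode()).hexdigest()
    old_hash = None
    if target.exists():
        old_hash = hashlib.sha256(target.read_bytes()).hexdigest()
    if new_hash == old_hash:
        return False
    target.write_text(content)
    return True

# Hypothetical rendered config; a real unit would build this from IMDS data
rendered = "server { listen 80; }\n"
if write_if_changed("/tmp/app.conf", rendered):
    # Only take the disruptive action when the file actually changed, e.g.
    # subprocess.run(["systemctl", "reload", "nginx"], check=False)
    print("configuration changed")
```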
The phase runs are synchronous and new phase runs will be queued. If the state changes while the initial, reboot or reload phase is finishing, the reload phase will run again once that phase has completed. This will not stack multiple phases in the queue; if two state changes occur before a phase completes only one additional reload phase will run. The data from the IMDS service will always be the latest data, it does not capture the data as it was at the start of a reload phase. This data should be expected to change during a phase run. If it does change during a phase run, this triggers a queued reload which can then capture the complete consistent state.
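The queueing rule can be illustrated with a small sketch (not the actual agent implementation): while a phase is running, any number of triggers collapse into at most one queued rerun.

```python
import threading

class ReloadCoalescer:
    """Collapse state-change triggers into at most one queued phase rerun."""

    def __init__(self, run_phase):
        self.run_phase = run_phase
        self.lock = threading.Lock()
        self.running = False
        self.pending = False

    def trigger(self):
        with self.lock:
            if self.running:
                # A phase is in progress: remember that one rerun is needed,
                # no matter how many triggers arrive before it finishes
                self.pending = True
                return
            self.running = True
        self._drain()

    def _drain(self):
        while True:
            self.run_phase()
            with self.lock:
                if not self.pending:
                    self.running = False
                    return
                self.pending = False
```

Two state changes arriving mid-run produce exactly one additional run, matching the behavior described above.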
Environmental Variables
Environmental variables are used to share data between Python and shell code. The `export()` function is added by default to the Python engine. This will export the variable to the shell environment and future Python environments.
```python {phase=reload}
import requests
url = "https://ipapi.co/json"
response = requests.get(url)
public_ip = response.json()["ip"]
export("public_ip", public_ip)
```
```shell {phase=reload}
echo "BASH> public_ip:" $public_ip
export kernel_version=$(uname -r)
```
```python {phase=reload}
import os
print("PYTHON> kernel_version:", os.getenv("kernel_version"))
```
Instance Spec
An instance spec configures the instance and the resources that the instance agent will have access to. Below is a complete instance spec with all the available options.
```yaml
name: userspace-web
kind: instance
count: 4
plan: +/plan/web
zone: +/zone/us-west-1a
node: +/node/east0
shape: +/shape/m2-small
vpc: +/vpc/vpc
subnet: +/subnet/primary
roles:
- instance
processors: 2
memory: 4096
uefi: true
secureBoot: true
cloudType: linux
tpm: true
vnc: false
deleteProtection: true
skipSourceDestCheck: false
hostAddress: true
publicAddress: false
publicAddress6: true
dhcpServer: false
image: +/image/almalinux10:latest
diskSize: 30
mounts:
- name: host-data
  type: host_path
  path: /var/lib/data
  hostPath: /mnt/data
- name: persistent-disk
  type: disk
  path: /var/lib/database
  disks:
  - +/disk/database-data
nodePorts:
- protocol: tcp
  externalPort: 32120
  internalPort: 80
- protocol: udp
  externalPort: 32140
  internalPort: 1024
certificates:
- +/certificate/dev-pritunl-com
secrets:
- +/secret/database
pods:
- +/pod/web-app
```
Below is detailed information for each option.
name: instance-name
[required] This will control the name the instance will have and the name of the boot disk attached to the instance. When possible this should be unique to the entire cluster.
kind: instance
[required] This must always be set to `instance` when creating an instance spec. A unit can only contain one instance.
count: 4
[optional] This controls the number of deployments to automatically maintain. Currently it is better to manually handle creating deployments from the deployments view until the deployment and plan spec system is further developed.
plan: +/plan/web
[optional] This references a plan by name to apply to the deployments using this spec.
zone: +/zone/us-west-1a
[required] Controls what zone the instances in this deployment will be created in. Currently one spec cannot create instances in multiple zones. This will be supported in the future.
node: +/node/east0
[optional/required] This controls what node by name the instances are deployed to. This is optional if a shape is specified otherwise it is required. It can be included with a shape as long as the node is available in that shape. In this case the shape will control the processor and memory size while the node will be explicit.
shape: +/shape/m2-small
[optional/required] This controls the shape that the instance will use. This is required if the node is not set. If a shape is set without a node set the node in that shape with the most available resources will be selected.
vpc: +/vpc/vpc
[required] Controls what VPC the instance will be added to.
subnet: +/subnet/primary
[required] Controls what subnet the instance is added to. Must be a subnet of the instance VPC.
```yaml
roles:
- instance
```
[optional] Controls what roles will be added to the instance. This supports a list of strings. Typically this will be used to control what SSH authorities are added to the instance and some base firewall rules. It is recommended to control application specific firewall rules using a firewall spec in the unit file.
processors: 2
[optional] Controls how many processors the instance will have. This can be overridden by the shape if the shape is not a flexible shape.
memory: 4096
[optional] Controls how much memory in megabytes the instance will have. This can be overridden by the shape if the shape is not a flexible shape.
uefi: true
[optional] Enables or disables the UEFI BIOS. This should be excluded except when using custom images as all Pritunl Cloud images use UEFI.
secureBoot: true
[optional] Enables or disables secure boot. If not included this is automatically enabled when supported by the image. This is the case for all Pritunl Cloud images except for Alpine Linux and FreeBSD.
cloudType: linux
[optional] Controls what cloud-init format to use. This can be either `linux` for Linux images or `bsd` for FreeBSD images. This should be excluded as it is automatically detected by default.
tpm: true
[optional] Enables or disables the software TPM. This is disabled by default for all instances.
vnc: false
[optional] Enables or disables the VNC service. This can be viewed from the web console in the instances tab. Disabled by default.
deleteProtection: true
[optional] Prevents the instance and instance boot disk from being deleted. The delete protection must be disabled before the instance or deployment can be deleted.
skipSourceDestCheck: false
[optional] This will allow network traffic that is from different addresses and networks. This can be used for VPN or network appliance instances to handle routing other traffic. Disabled by default.
hostAddress: true
[optional] Controls if the instance will be given an IP address on the host network. This should never be disabled except for specific use cases or debugging. Enabled by default.
publicAddress: false
[optional] Controls if the instance will be given a public IPv4 address. The default for this is determined by the Default instance public IPv4 address option in the node settings.
publicAddress6: true
[optional] Controls if the instance will be given a public IPv6 address. The default for this is determined by the Default instance public IPv6 address option in the node settings.
dhcpServer: false
[optional] Controls if the instance DHCP server will be enabled. This is a DHCP server that runs in the instances network namespace to supply the instance with an IPv4 address using DHCP. This should only be used when the instance operating system does not support cloud-init. Disabled by default.
image: +/image/almalinux10:latest
[required] Base image for the instance. This can be a Pritunl Cloud supplied image by using the `image` resource or a build image by using the `build` resource. For Pritunl Cloud images a specific date tag can be supplied, or use `latest` for the current latest image. If no tag is included the latest image will be used. For builds the build is referenced by name and can use the automatic `latest` tag to use the latest build. Other build tags must be manually added by clicking settings on the image build and adding a tag.
diskSize: 30
[optional] Controls the size of the instance boot disk in gigabytes. If left blank the default size of 10 GB will be used.
```yaml
mounts:
- name: host-data
  type: host_path
  path: /var/lib/data
  hostPath: /mnt/data
- name: persistent-disk
  type: disk
  path: /var/lib/database
  disks:
  - +/disk/database-data
```
[optional] Mounts allow mounting either a disk or host path. When mounting a disk the disk must be created with the *File System* option set. This stores the partition UUID to allow the Pritunl Cloud agent to mount the partition. For host paths the host path or a parent directory of the path must first be made available to the organization from the host settings.
```yaml
nodePorts:
- protocol: tcp
  externalPort: 32120
  internalPort: 80
- protocol: udp
  externalPort: 32140
  internalPort: 1024
```
[optional] Node ports allow accessing services on the pod instance from the public IP address of the node. This only works on the node that is running the instance.
```yaml
certificates:
- +/certificate/dev-pritunl-com
```
[optional] This will make certificates available to the Pritunl Cloud agent. It does not store the certificate on the instance disk. The command `sudo pci get +/certificate/dev-pritunl-com/certificate` can then be used to retrieve the certificate. If the certificate is changed or deleted and recreated with the same name the data supplied by the command will be updated. An updated certificate should refresh within a few seconds; if the certificate is deleted and recreated the update could take 1-3 minutes.
```yaml
secrets:
- +/secret/database
```
[optional] This will make secrets available to the Pritunl Cloud agent. Generally only JSON type secrets should be used as these provide the most flexible usage. The other secret types are intended for use within Pritunl Cloud resources such as domain and certificate management. Use the command `sudo pci get +/secret/database/data.secret` to get data from the secret. The `data` value with dot notation is used to extract keys from a JSON type secret.
```yaml
pods:
- +/pod/web-app
```
[optional] This makes pod and unit information available to the Pritunl Cloud agent. The pod that the unit exists in is available by default. Once a pod has been added to the spec the unit information becomes available with the `sudo pci get +/unit/<name>/key` command. The data available with these commands is explained in the IMDS documentation section.
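As an illustration, a shell block could combine these lookups; the secret key and unit name below are hypothetical and the `pci` commands only work inside a deployed instance:

```shell {phase=initial}
# Dot notation key from a hypothetical JSON secret
db_host=$(sudo pci get +/secret/database/data.host)

# Addresses of a unit from a pod added to the spec
web_ips=$(sudo pci get +/unit/web-app/private_ips)
```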
Image Spec
Image specs have the same options as an instance spec although some have no effect on images, including options like `count` and `plan`. After creating an image spec the menu will show the builds label instead of deployments. From this section new builds can be started from the menu. Then tags can be added to each build in the settings to utilize the image for other instance units. The image unit does not need to be in the same pod as the unit using the image. The image builds will be available to all pods in the same organization. When using image builds in a spec the resource group `+/build/` is used instead of `+/image/`. This is followed by the unit name then optionally a tag. The reserved tag `latest` will always reference the last completed build. If no tag is included the latest build will be used.
```yaml
image: +/build/web-app:latest
image: +/build/web-app:1.2.0
```
Firewall Spec
Below is a complete firewall spec with all the available options. This should be included after an instance spec to control firewall rules for that instance.
```yaml
name: web-server-firewall
kind: firewall
ingress:
- protocol: tcp
  port: 22
  source:
  - 10.0.0.0/8
- protocol: tcp
  port: 80
  source:
  - +/unit/load-balancer/private_ips
  - +/unit/load-balancer/private_ips6
  - +/unit/load-balancer/public_ips
  - +/unit/load-balancer/public_ips6
  - +/unit/load-balancer/cloud_private_ips
  - +/unit/load-balancer/cloud_public_ips
  - +/unit/load-balancer/cloud_public_ips6
  - +/unit/load-balancer/host_ips
- protocol: udp
  port: 10000-20000
  source:
  - 10.0.0.0/8
```
Below is detailed information for each option.
name: web-server-firewall
[required] Currently this is only used as a label in the spec. Firewall specs do not create any resources that would utilize the name.
kind: firewall
[required] This must always be set to `firewall`. A unit can only contain one firewall spec.
ingress:
[required] This contains a list of ingress rules as documented below. This can include any number of rules including rules with the same protocol and port.
```yaml
ingress:
- protocol: tcp
```
[required] This sets the protocol to apply the rule to. Can be `all` to allow all traffic, `icmp` for both ICMPv4 and ICMPv6 traffic, `tcp` for TCP traffic, `udp` for UDP traffic, `multicast` for multicast traffic and `broadcast` for broadcast traffic. Only one protocol can be set.
```yaml
ingress:
- port: 80
```
[optional/required] Sets the ports allowed by this rule. If the `protocol` is `all` or `icmp` this cannot be included; for all other protocols it must be included. It can be either a single port or a range of ports such as `10000-20000`.
```yaml
ingress:
- source:
  - 10.0.0.0/8
```
[required] A list of sources to allow traffic from on this rule. This can be either a CIDR network range or a unit reference.
Domain Spec
Below is a complete domain spec with all the available options. This should be included after an instance spec to automatically manage DNS entries for deployments. If there are multiple deployments with the same domain all the IP addresses of the deployments will be added to the DNS A or AAAA record. The domain must first be created and configured in the same organization as the pod with an API in the domains section.
```yaml
---
name: web-server-domain
kind: domain
records:
- name: web
  domain: +/domain/pritunl-dev
  type: public
- name: service
  domain: +/domain/pritunl-dev
  type: private
```
Below is detailed information for each option.
name: web-server-domain
[required] Currently this is only used as a label in the spec. Domain specs do not create any resources that would utilize the name.
kind: domain
[required] This must always be set to `domain`. A unit can only contain one domain spec.
records:
[required] This contains a list of DNS records as documented below. This can include any number of records.
```yaml
records:
- name: web
```
[required] This sets the subdomain of the record. In this example it will create the `web` subdomain under `+/domain/pritunl-dev` which will form the domain `web.pritunl.dev`. A dot can be included in the name for nested subdomains.
```yaml
records:
- domain: +/domain/pritunl-dev
```
[required] This sets the domain resource to create the DNS record in.
```yaml
records:
- type: public
```
[required] This determines the IP address type to use for the record. Below are the available values for this option.
- `private` - Instance private VPC IPv4 address
- `private6` - Instance private VPC IPv6 address
- `public` - Instance public IPv4 address
- `public6` - Instance public IPv6 address
- `cloud_public` - Oracle Cloud public IPv4 address
- `cloud_public6` - Oracle Cloud public IPv6 address
- `cloud_private` - Oracle Cloud private VCN IPv4 address
- `host` - Instance host IPv4 address