# Simple Web Server Pod

This guide creates a simple Nginx web server exposed via node port 32120, then configures a Cloudflare load balancer to route traffic to the server.

This tutorial assumes instances cannot have public IP addresses, as is the case when running Pritunl Cloud in a virtualized environment such as an Azure virtual machine. In a production deployment on bare metal, some instances would typically be given public IPv4 addresses to function as load balancers for instances on the same VPC. Preferably all instances should also be given IPv6 addresses to allow for easier management and access. Even so, the pass-through mechanisms in Pritunl Cloud, including node ports, are well optimized and can also be used for production traffic with a Cloudflare load balancer.

### Configure Pod

Open the *Pods* tab and click *New* then name the pod `test`, set the organization to `org` and click *Create*. This pod can then contain multiple units each with a spec file.

<figure><img src="/files/ZZwk2oLB33A1vBwfHSqB" alt=""><figcaption></figcaption></figure>

Then select the new pod and click *New Unit*. Copy the unit spec from below and click *Save*. This pod spec uses the default resource names that are created during the installation. All of these resources will be explained later in the documentation and can be renamed or modified.

The first `yaml` code block in a spec file will always be parsed as the resource specification; any additional `yaml` blocks will be ignored by Pritunl Cloud. When specifying multiple resource specs they must be separated by `---`, as shown below where the instance, firewall and journal resources are separated.

This pod unit will map port 80 of the instance to port 32120 of the host node. Multiple deployments can exist on one host with node ports; when this occurs Pritunl Cloud will automatically configure an IPVS load balancer. This will be demonstrated by modifying the Nginx index to include the instance ID, using the `pci get +/instance/self/id` command.

The `pci` command is an instance agent and CLI tool used to allow the instance to communicate with the IMDS server on the host. This tool runs the spec file and provides system information including CPU, memory and log output to Pritunl Cloud. The `get` sub-command is used to query information from the IMDS server. In this case the `+/instance/self/id` will return the instance ID of itself.
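
As a sketch of how a spec command uses the agent, the snippet below defines a mock `pci` function so it can run anywhere; on a real instance the actual `pci` binary would answer the same query from the host IMDS server. The returned ID here is just a sample value.

```shell
# Mock of the pci agent CLI so this sketch runs outside an instance;
# on a deployed instance, delete this function and the real binary
# answers the query from the host IMDS server.
pci() {
    if [ "$1" = "get" ] && [ "$2" = "+/instance/self/id" ]; then
        echo "688db9219da165ffad4e439d"  # sample instance ID
    fi
}

# Query the instance's own ID the same way the spec commands do
INSTANCE_ID=$(pci get +/instance/self/id)
echo "cloud-${INSTANCE_ID}"
```

With the mock removed, running `pci get +/instance/self/id` on a deployed instance returns that instance's real ID.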

This spec also shows the usage of `shell` and `shell {phase=reboot}` blocks. When the Pritunl Cloud Agent parses spec files it will run all shell code blocks; any other blocks or data outside blocks are ignored, which allows including documentation and other data in the spec file. A `shell` block with no phase specified will only run once when creating a new deployment. A `shell` block with `{phase=reboot}` will run every time the instance is started, including the initial run.
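
The run-once versus every-boot behavior can be modeled as below. This is only an illustration of the ordering, not the agent's actual implementation; a marker file stands in for the deployment state the agent tracks.

```shell
# Illustration of spec phase behavior, NOT the real agent code.
STATE=/tmp/demo-deploy-state
rm -f "$STATE"  # simulate a fresh deployment

boot() {
    if [ ! -f "$STATE" ]; then
        # `shell` blocks with no phase: initial deployment only
        echo "no-phase shell blocks run (initial deployment only)"
        touch "$STATE"
    fi
    # `shell {phase=reboot}` blocks: every start, including the first
    echo "phase=reboot shell blocks run (every start)"
}

boot  # initial deployment: both block types run
boot  # subsequent restart: only phase=reboot blocks run
```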

A firewall ruleset is also shown in this spec. This will allow SSH traffic to the instance from any IP address. Node ports do not need to be opened on the instance firewall, but the host nodes will need to allow traffic on the node port range `30000-32767`. **If a firewall is configured on the host or security groups exist external to the host, verify TCP ports `30000-32767` are open. For a more secure configuration the node port range can be opened only to Cloudflare's public IP ranges.**
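
For the more secure configuration, the host firewall rules could be generated along these lines. This is a sketch assuming a host running firewalld; the CIDR list is a small example subset of Cloudflare's published IPv4 ranges, and the commands are printed for review rather than executed.

```shell
# Example subset of Cloudflare IPv4 ranges; fetch the full current
# list from https://www.cloudflare.com/ips-v4 before applying.
CF_RANGES="173.245.48.0/20 103.21.244.0/22 104.16.0.0/13"

# Print firewalld rich rules allowing the node port range only from
# these sources; review the output then run it on the host as root.
for cidr in $CF_RANGES; do
    printf "firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source address=\"%s\" port port=\"30000-32767\" protocol=\"tcp\" accept'\n" "$cidr"
done
echo "firewall-cmd --reload"
```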

The journal definition will stream the Nginx systemd logs to Pritunl Cloud, which can be viewed using the logs drop-down menu in the deployment view.

````yaml
```yaml
---
name: nginx
kind: instance
zone: +/zone/us-west-1a
shape: +/shape/m2-small
processors: 2
memory: 2048
vpc: +/vpc/vpc
subnet: +/subnet/primary
image: +/image/oraclelinux9
roles:
    - instance
nodePorts:
    - protocol: tcp
      externalPort: 32120
      internalPort: 80

---
name: nginx-firewall
kind: firewall
ingress:
    - protocol: tcp
      port: 22
      source:
        - 0.0.0.0/0

---
name: nginx-logs
kind: journal
inputs:
    - key: nginx
      type: systemd
      unit: nginx
```

## Install Nginx

```shell
dnf install -y nginx
sed -i "s/Test Page/cloud-$(pci get +/instance/self/id)/" /usr/share/nginx/html/index.html
```

## Start Nginx

```shell {phase=reboot}
systemctl start nginx
```
````

Once complete the spec should be displayed in formatted markdown. This will highlight code blocks based on the phase.

<figure><img src="/files/s9uvdb9gxihtaNbOUYgx" alt=""><figcaption></figcaption></figure>

Click *Settings* to open the pod menu and click *View Deployments*. Then open the menu again and click *New Deployment*. Leave the default values for the deployment and click *Create*.

<figure><img src="/files/mvabigonjtrBP87TEGma" alt=""><figcaption></figcaption></figure>

This will then start the deployment. The first deployment will need to download the base image; the progress can be viewed from the *Instances* page or by clicking *View Instance*. Base images will be cached for faster future deployments. Click *Logs* on the new deployment to watch the status of the spec commands.

<figure><img src="/files/TcB0R0YlrWJ375SgRIJI" alt=""><figcaption></figcaption></figure>

This will eventually reach `systemctl start nginx`, indicating the Nginx server has started. Verify the web server is accessible by opening `http://<server_ip>:32120`; this should display the Nginx page with the instance ID in the modified title. **Node ports in Pritunl Cloud only forward traffic from the host to instances running on that local host.**

<figure><img src="/files/kI313KRbSfqt4LDn6q16" alt=""><figcaption></figcaption></figure>

Open the menu again and click *New Deployment* then *Create*. This will create a second deployment of this same spec. Click *Logs* on the new deployment and wait for it to complete. The Pritunl Cloud host will cache base images allowing the second deployment to initialize faster.

<figure><img src="/files/THWJFlILW5J7UnzHtn7Z" alt=""><figcaption></figcaption></figure>

In the web browser open Chrome Developer Tools and press `Ctrl+Shift+R` to trigger a hard reload. The browser will often still cache the page, but after several refreshes the instance ID of the second deployment should be shown, indicating the node port load balancing is working.

<figure><img src="/files/skkbACYwpVklvX2AiNHM" alt=""><figcaption></figcaption></figure>

The round robin load balancing won't be apparent from the browser due to caching, but it can be reliably verified on Linux with the command `curl -s http://<server_ip>:32120 | grep cloud` as shown below.

```sh
cloud@dev:~$ curl -s http://<server_ip>:32120 | grep cloud
	<title>cloud-688db9219da165ffad4e439d for the HTTP Server on Oracle Linux</title>
	<h1>Oracle Linux <strong>cloud-688db9219da165ffad4e439d</strong></h1>
cloud@dev:~$ curl -s http://<server_ip>:32120 | grep cloud
	<title>cloud-688dbc759da165ffad4e4ab1 for the HTTP Server on Oracle Linux</title>
	<h1>Oracle Linux <strong>cloud-688dbc759da165ffad4e4ab1</strong></h1>
cloud@dev:~$ curl -s http://<server_ip>:32120 | grep cloud
	<title>cloud-688db9219da165ffad4e439d for the HTTP Server on Oracle Linux</title>
	<h1>Oracle Linux <strong>cloud-688db9219da165ffad4e439d</strong></h1>
cloud@dev:~$ curl -s http://<server_ip>:32120 | grep cloud
	<title>cloud-688dbc759da165ffad4e4ab1 for the HTTP Server on Oracle Linux</title>
	<h1>Oracle Linux <strong>cloud-688dbc759da165ffad4e4ab1</strong></h1>
```
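
Instead of eyeballing the alternation, the responses can be tallied. The snippet below runs against captured sample output so it is self-contained; with a live deployment, feed it real responses, e.g. `for i in $(seq 20); do curl -s http://<server_ip>:32120; done`.

```shell
# Tally which instance served each response; an even split across the
# instance IDs confirms round robin. The here-doc holds sample output
# captured from the curl commands above.
TALLY=$(grep -o 'cloud-[0-9a-f]*' <<'EOF' | sort | uniq -c
<title>cloud-688db9219da165ffad4e439d for the HTTP Server on Oracle Linux</title>
<title>cloud-688dbc759da165ffad4e4ab1 for the HTTP Server on Oracle Linux</title>
<title>cloud-688db9219da165ffad4e439d for the HTTP Server on Oracle Linux</title>
<title>cloud-688dbc759da165ffad4e4ab1 for the HTTP Server on Oracle Linux</title>
EOF
)
echo "$TALLY"
```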

### Configure Cloudflare Load Balancer

These next steps are optional but demonstrate how to expose the server to the internet with HTTPS using a Cloudflare load balancer. This will require a Cloudflare account with a domain and the $5/month load balancer service. From the Cloudflare dashboard select *Load Balancing* and click *Create load balancer*. Select *Public load balancer*, select a domain to use and click *Next*. Enter a sub-domain in the *Hostname* field and click *Next*.

<figure><img src="/files/kAjUk2cBaWEutNC38POF" alt=""><figcaption></figcaption></figure>

On the pools page click *Create a pool* then set the name. Remove the second endpoint input then set the first endpoint name, set the *Endpoint Address* to the Pritunl Cloud node public IP, set the *Port* to 32120 and *Weight* to 1. Then click *Save*.

<figure><img src="/files/rXsZa20oPnrNSjaUExZf" alt=""><figcaption></figcaption></figure>

Set the *Fallback Pool* to the same pool that was just created and click *Next*. Continue to click *Next*, skipping the remaining optional settings, then click *Save and Deploy*. Avoid loading the domain immediately after deploying the load balancer as this can cache an incomplete DNS lookup. Wait a few minutes or run `dig @8.8.8.8 <domain>` to verify the DNS entry is complete. Opening this domain with HTTPS should now display the same Nginx default page with the instance ID. The round robin balancing between the two deployments should continue to work when testing with the curl command as shown below.

```sh
cloud@dev:~$ curl -s https://web-dev.pritunl.com/ | grep cloud
	<title>cloud-688db9219da165ffad4e439d for the HTTP Server on Oracle Linux</title>
	<h1>Oracle Linux <strong>cloud-688db9219da165ffad4e439d</strong></h1>
cloud@dev:~$ curl -s https://web-dev.pritunl.com/ | grep cloud
	<title>cloud-688dbc759da165ffad4e4ab1 for the HTTP Server on Oracle Linux</title>
	<h1>Oracle Linux <strong>cloud-688dbc759da165ffad4e4ab1</strong></h1>
cloud@dev:~$ curl -s https://web-dev.pritunl.com/ | grep cloud
	<title>cloud-688db9219da165ffad4e439d for the HTTP Server on Oracle Linux</title>
	<h1>Oracle Linux <strong>cloud-688db9219da165ffad4e439d</strong></h1>
cloud@dev:~$ curl -s https://web-dev.pritunl.com/ | grep cloud
	<title>cloud-688dbc759da165ffad4e4ab1 for the HTTP Server on Oracle Linux</title>
	<h1>Oracle Linux <strong>cloud-688dbc759da165ffad4e4ab1</strong></h1>
```
