
Infrastructure Inventory

River Oaks Church - Created April 2026

This page is a single-source-of-truth inventory of the on-prem and adjacent infrastructure that makes River Oaks technology run. It was assembled by walking the riveroaks/infra and riveroaks/portal repos and stitching configs, READMEs, and existing pages on this docs site together. Where a detail wasn't determinable from source, it is marked (unknown — confirm) in the Open Questions appendix.

This page is internal-only and lives behind the docs site's auth. Nothing on this page should be treated as a security boundary on its own.


Overview

  • Single Proxmox server at the River Oaks main campus (no cluster).
  • 40 logical CPU cores, 49.43 TiB storage, 13 VMs online, 0 stopped, 3 LXC containers. Utilization (point-in-time, 2026-04-27): 68% RAM, 19% CPU, 44% storage.
  • Internet: 1 Gbps fiber primary, 300 / 40 Mbps cable backup.
  • Network fabric: 10G backbone, Ubiquiti end-to-end ("pretty large" network for the site size), plus a handful of Raspberry Pis handling one-off tasks.
  • Cross-site connectivity to Cole's datacenter and home runs over a WireGuard mesh with FRR-OSPF routing terminated at a dedicated VPN-Node VM at the church. Some River-Oaks-owned VMs sit at the Beam Networks datacenter and ride that mesh in.
  • Public ingress: primarily nginx on rocc-db for *.ro.church apps, plus Cloudflare (Pages, Tunnels, Access) for some surfaces.
  • Backups: three-tier — local (Proxmox host), NAS (gofer2), and weekly offsite to a Google Team Drive via rclone running in LXC 109. See VM Backups for the full schedule.

The server itself is referred to as rocc01 in older notes (see Inventory home). The Proxmox web UI is at https://proxmox.1nine89.net (per the docs home page).


VMs and Containers

The list below is what is identifiable from repo source as of 2026-04-27. The Proxmox host reports 13 VMs + 3 LXCs online, so several VMs / containers exist that aren't fingerprinted in riveroaks/infra yet. Those are noted as (unknown — confirm) in Open Questions.

rocc-db — webserver / DB / Node-RED / portal host

The most heavily loaded VM; everything external on *.ro.church lands here.

Field Value
Hostname rocc-db
IP 10.200.24.42
Proxmox VMID 100 (per backup policy in VM Backups)
OS Ubuntu (PHP 8.1, nginx, MySQL/MariaDB)
Role Public web tier + MySQL + Node-RED + portal stack
Public surface portal.ro.church, forms.ro.church, lyrics.ro.church, dev.portal.ro.church, ro.church (short links), *.gs.ro.church (Goshen wildcard), links.gs.ro.church, eugenecarol.com, forms.eugenecarol.com
Internal-only listeners nginx :1881 (Node-RED proxy), nginx :8080 (phpMyAdmin), nginx :8020 localhost-only (UniFi API helper), nginx :8123 TCP stream → 10.0.50.13:8123 (MQTT broker)
Services on host nginx, php8.1-fpm, MySQL/MariaDB, Node-RED (PM2), local certbot, GitLab CI runner ro-portal-1
Auto-deploy nginx is wired through infra repo's deploy:nginx:rocc-db GitLab job (manual on main); portal deploys via riveroaks/portal CI on push to main / dev
Owner Cole (primary)

Notes pulled from source:

  • The repo riveroaks/infra/nginx/hosts/rocc-db/ is a one-to-one mirror of /etc/nginx/ on this VM. Top-level nginx.conf defines a map $host $backend_endpoint for *.gs.ro.church proxying and a stream listener on :8123.
  • The Node-RED runtime at ~/.node-red/ (PM2 app node-red) listens on :1880; nginx proxies :1881 to localhost:1880. Custom palette has 23 node-red-contrib-* modules (see infra/nodered/instances/rocc-db-local/package.json).
  • The portal mounts at /usr/share/nginx/portal/{dashboard,forms,lyrics,errors,unifiapi}/. The dev portal mounts at /usr/share/nginx/html/dev-portal/dashboard/.
  • TLS: wildcard cert for gs.ro.church lives at /home/riveroaks/local-certbot/certbot/conf/live/gs.ro.church/. The *.ro.church chain is presumed managed elsewhere (unknown — confirm).

Reload: sudo nginx -t && sudo systemctl reload nginx. Node-RED: pm2 restart node-red (run with the user's nvm-managed PATH).
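
A few illustrative checks that tie the notes above together; the listener ports and the local-certbot config directory are taken from this page, but confirm them against the live VM before relying on them.

```bash
# Sanity checks on rocc-db after an nginx or cert change (illustrative).
sudo nginx -t && sudo systemctl reload nginx

# The internal-only listeners documented above should all be bound locally.
ss -ltnp | grep -E ':(1880|1881|8080|8020|8123)\s'

# Certificates managed by the local certbot (config dir per the TLS note above).
sudo certbot certificates --config-dir /home/riveroaks/local-certbot/certbot/conf
```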

bitwarden — Bitwarden / Vaultwarden VM

Field Value
Hostname (unknown — confirm; called "Bitwarden" in backup policy)
Proxmox VMID 110 (high-frequency backup tier per VM Backups)
Role Self-hosted password manager for IT credentials
Public surface (unknown — confirm) — referenced as the credential store of record across the IT docs
Owner Cole (primary)

storage-mgr — backup orchestrator (LXC 109)

Field Value
Container ID 109 (LXC)
Role Drives nightly rclone push of Proxmox backups to Google Team Drive
Mounts /mnt/proxmox-backups (read-only bind from host's /var/lib/vz/dump)
Cron 0 0 * * * /root/proxmox-backup-sync.sh (the cron fires nightly; VM Backups describes the offsite snapshot as weekly with 13-week retention; confirm cadence)
Excluded from local backup jobs yes, to avoid recursion

Documented at VM Backups.
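
The sync script itself is not mirrored in riveroaks/infra, so the following is only a minimal sketch of what a nightly rclone push with a 13-week retention window could look like. The remote name gdrive-team and the destination path are placeholders, not confirmed values.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of /root/proxmox-backup-sync.sh; the real script is not in
# the repo. "gdrive-team" is a placeholder rclone remote for the Google Team Drive.
set -euo pipefail

SRC=/mnt/proxmox-backups          # read-only bind of the host's /var/lib/vz/dump
DST=gdrive-team:proxmox-backups   # destination path on the Team Drive (placeholder)

# Push new dump files offsite.
rclone copy "$SRC" "$DST" --include "vzdump-*" --transfers 4 --log-level INFO

# Enforce a 13-week retention window on the remote.
rclone delete "$DST" --min-age 13w
```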

ro-ntfy — Ntfy notifications gateway

Field Value
Hostname ro-ntfy (called out by name in Ntfy Account Setup)
IP (unknown — confirm)
Role Alert/notification fan-out for IT
Public surface notifications.ro.church

Per IT Notifications, the Ntfy service is described as currently hosted at the datacenter. The user-add procedure (Ntfy Account Setup) tells operators to log into Proxmox and find a VM named ro-ntfy, which suggests an on-prem VM exists (confirm: is ro-ntfy on-prem at RO, at the datacenter, or both?).
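
For orientation, publishing an alert to an ntfy server is a single HTTP POST. The topic name below is a placeholder, and the authentication details should be taken from Ntfy Account Setup rather than this sketch.

```bash
# Hypothetical publish to the RO ntfy instance; "it-alerts" is a placeholder topic.
curl -u "username:password" \
  -H "Title: Test alert" \
  -d "Backup job finished on rocc-db" \
  https://notifications.ro.church/it-alerts
```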

billionmail — MailerSend dynamic-template substitute

Field Value
Hostname (unknown — confirm; service name "BillionMail")
IP 10.141.74.109
Role Self-hosted dynamic email templating; receives traffic from db-nodered
Where it actually runs today datacenter (per Goodbye SendGrid, with a stated plan to move it to the church)

Lives on the same VLAN as the rest of the RO VMs at the datacenter, so RO can reach it via the OSPF mesh through the static route to the datacenter. Listed here because it is a River-Oaks-owned VM even though it is currently parked at the BN datacenter.

Goshen-campus Node-RED VMs (mpr-nr, youth-nr, tech-nr)

Three Node-RED runtimes referenced by the map $host $backend_endpoint block in rocc-db's nginx.conf:

Hostname (vhost) Backend
mpr-nr.gs.ro.church 172.16.64.24:1880
youth-nr.gs.ro.church 172.16.64.25:1880
tech-nr.gs.ro.church 172.16.64.26:1880

All three are reverse-proxied via the *.gs.ro.church wildcard vhost on rocc-db, with WebSocket upgrade headers preserved. They sit on the Goshen campus VLAN (172.16.64.0/24). Whether each is its own VM, its own LXC, or all on one Proxmox host at the Goshen campus is (unknown — confirm).
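
A hedged reachability check for the three backends and the wildcard vhost that fronts them, run from a host on the RO/Goshen mesh. It assumes the wildcard vhost serves HTTPS on 443 using the gs.ro.church cert noted under rocc-db; adjust if the vhost is plain HTTP.

```bash
# Direct checks against the Goshen Node-RED editors (IPs per the nginx map).
for ip in 172.16.64.24 172.16.64.25 172.16.64.26; do
  curl -s -o /dev/null -w "$ip:1880 -> %{http_code}\n" "http://$ip:1880/"
done

# The same backends through the *.gs.ro.church wildcard vhost on rocc-db.
for name in mpr-nr youth-nr tech-nr; do
  curl -sk -o /dev/null -w "$name.gs.ro.church -> %{http_code}\n" \
    --resolve "$name.gs.ro.church:443:10.200.24.42" "https://$name.gs.ro.church/"
done
```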

This is consistent with the Node-RED topology shared-memory note that River Oaks runs 5–10 separate Node-RED instances; these three, plus rocc-db-local and db.nodered.ro.church (see below), put the visible count at five.

db-nodered / edit.db.nodered.ro.church — primary RO Node-RED

Field Value
Hostname / FQDN edit.db.nodered.ro.church (editor); db.nodered.ro.church (logical)
Where it runs (unknown — confirm); not the rocc-db-local instance based on naming, and not in the infra repo yet
Role Webhook target for the portal — login events, WiFi signups, key fob requests, account creation, email send-out
Talked to by riveroaks/portal (per portal README.md); RO MailerSend / BillionMail flow
Owner (unknown — confirm; presumed Cole)

The portal's README.md calls out this Node-RED endpoint by FQDN as the webhook receiver for the WiFi onboarding flow and the login event log. Distinct from rocc-db-local (port 1881 on rocc-db itself), so this is at minimum a fourth RO Node-RED instance.

links.gs.ro.church backend

Field Value
Hostname (unknown — confirm)
IP / Port 10.200.24.36:3000 (per nginx vhost in infra/nginx/hosts/rocc-db/sites-enabled/gs.ro.church.conf)
Role Goshen short-link service
Public surface links.gs.ro.church

10.200.24.36 sits in the same 10.200.24.0/24 subnet as rocc-db (.42), suggesting a sibling VM on the server VLAN. Possibly YOURLS or a similar URL-shortener stack.

Endpoints referenced from configs but not yet documented (10.0.50.0/24 and 10.200.25.0/24)

Two additional surfaces visible from configs but not yet documented:

  • 10.0.50.13:8123 — TCP stream proxy target from rocc-db's nginx (stream { server { listen 8123; proxy_pass 10.0.50.13:8123; } }). Cross-referenced as the MQTT broker address used by Node-RED flows (per infra/nodered/instances/rocc-db-local/flows.json line 643). (confirm host — could be at the datacenter on the OSPF mesh given the 10.0.x.x prefix, or could be a homelab address routed in.) A probe sketch follows this list.
  • 10.200.25.155 and 10.200.25.156 on TCP 12445 — Two-NVR UniFi Access cluster ("stacked NVRs" per the Door Access home page). Hardware appliances, not Proxmox VMs. Termination for door-control and key-fob enforcement.
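
A minimal probe of that stream proxy, assuming the broker speaks plain (non-TLS) MQTT over TCP and the mosquitto-clients package is available; topic and any required credentials are unconfirmed.

```bash
# Subscribe for one message (or time out after 5 s) via the nginx stream proxy.
mosquitto_sub -h 10.200.24.42 -p 8123 -t '#' -C 1 -W 5 -v
# Or hit the broker directly across the mesh.
mosquitto_sub -h 10.0.50.13 -p 8123 -t '#' -C 1 -W 5 -v
```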

Cluster of Datacenter-resident RO VMs

Several of the IPs that show up in portal source live in the 10.141.x.x and 172.19.x.x ranges, which belong to RO-owned VMs hosted on the BN datacenter cluster (the trust-boundary exception called out in Team/_SharedMemory/reference_infrastructure_topology_and_capacity.md). The ones identified by name:

Service Address Identified from
BillionMail 10.141.74.109 Goodbye SendGrid
RAD1 / RAD2 / RAD3 (FreeRADIUS) (unknown — confirm exact IPs; on the 172.19.x.x range based on portal log queries) RADIUS Troubleshooting — "hosted on 3 virtual machines on my (Cole's) servers at the datacenter"

These are documented under the River Oaks side because they're RO-owned even though they're physically at the BN datacenter — this matches the trust-boundary policy.

Other Proxmox guests on the on-prem server (unidentified)

The Proxmox host reports 13 VMs + 3 LXCs total. The list above fingerprints roughly 4–5 on-prem VMs (rocc-db, bitwarden, ro-ntfy if on-prem, possibly the links backend at 10.200.24.36, possibly more) plus 1 LXC (storage-mgr 109). That leaves a sizable gap. See Open Questions for the list of unfingerprinted guests we'll need to confirm with Cole.


Services not on a dedicated VM

MySQL / MariaDB

Lives on rocc-db (localhost) and is the database for the entire portal monorepo. Three databases are in active use:

  • public — primary portal database (users, key fobs, networks, logs, vars).
  • tickets — IT/maintenance tickets.
  • expense_forms — March 2026 expense-request app, plus line items, attachments, and audit events.

A fourth database (logs) is referenced in dashboard/static/config.php.

Connection details are loaded from dashboard/static/config.php and the root .env (for the expense flow). MySQL is localhost-only on rocc-db; no external listener.
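
Illustrative checks of the localhost-only posture and the database list; the client user below is a placeholder, and real credentials live in config.php and the root .env.

```bash
# Confirm MySQL/MariaDB only listens on loopback (expect 127.0.0.1:3306, no 0.0.0.0).
ss -ltn 'sport = :3306'

# List the databases described above ("portal" user is a placeholder name).
mysql -h 127.0.0.1 -u portal -p -e 'SHOW DATABASES;'
```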

This is the RO MySQL host. It is separate from Beam's au-db.cg-e.net:6033 instance — see Team/_SharedMemory/reference_mysql_hosts.md. RO never connects to au-db; Beam never connects to this one.

Node-RED runtimes (RO side)

At least five identifiable instances; the shared-memory Node-RED topology note says 5–10 total.

Instance Where Notes
rocc-db-local rocc-db (10.200.24.42:1880, proxied via nginx :1881) PM2-managed; flows tracked in infra/nodered/instances/rocc-db-local/. Has adminAuth enabled.
mpr-nr 172.16.64.24:1880 (Goshen) Multipurpose-room flows (presumed).
youth-nr 172.16.64.25:1880 (Goshen) Youth flows.
tech-nr 172.16.64.26:1880 (Goshen) Tech-team flows.
db.nodered.ro.church (edit.db.nodered.ro.church) (unknown host) Primary webhook receiver for the portal. Distinct from rocc-db-local.

Five identified; 0–5 more (unknown — confirm).

rocc-db-local's flow file references the MQTT broker at 10.0.50.13:8123 and an external auth-event SQL flow that writes to a nr-user-access table. Several flows POST to UniFi Access NVRs at 10.200.25.155 / 10.200.25.156.

Snipe-IT (asset tracking)

Field Value
Public surface inventory.ro.church
Where "Snipe-IT is ran in a Docker hosting software called Cloudron. The Cloudron VM is ran on rocc01." (per Inventory home)
Owner Cole / RO IT

This is a separate Snipe-IT instance from Beam Networks' Snipe-IT (per Team/_SharedMemory/reference_snipe_it_inventory.md). Data does not federate.

Documentation site (this site)

Field Value
Public surface docs.ro.church (auth-protected)
Build MkDocs (Material theme) — mkdocs build && wrangler pages deploy site/
Hosting Cloudflare Pages (project ro-docs) — see riveroaks/docs/.gitlab-ci.yml
CI runner ro-docker-1
Repo git-local.beamnetworks.cloud:riveroaks/docs.git, branch main

Note: the page Documentation Site Hosting currently says the site runs on Cole's Proxmox cluster at the datacenter — that is out of date as of 2026-04. The CI definition deploys to Cloudflare Pages now. Worth a follow-up edit when convenient.

UniFi Network Controller

Field Value
Internal endpoint https://10.100.1.58 (per portal README.md, "External Integrations")
Where UDM / NVR appliance, not a VM
Role Manages WiFi WLANs and access groups on the campus network
Used by portal's UniFi API client for WiFi WLAN management

UniFi Access (door / key fob)

Field Value
Internal endpoints https://10.200.25.155:12445, https://10.200.25.156:12445
Where "stacked NVRs" — hardware appliances at the church
Role Door access control + key-fob enforcement
Used by portal key-fob flow + Node-RED flows that grant/revoke user-group membership

UISP (PTP wireless)

Field Value
Public surface uisp.beamnetworks.dev
Where Hosted at Beam Networks' datacenter, not at RO (per UISP Hosting)
Role Manage Ubiquiti PTP wireless links between buildings

Out of scope for this RO inventory but documented here because the existing docs page already calls it out.

ProPresenter

Field Value
Internal endpoint http://10.200.5.5:1025/v1/
Where Sanctuary control PC (not on Proxmox)
Role Source of slide / lyric data for lyrics.ro.church
Polled by lyrics/poll_propresenter.php on rocc-db (every 50 ms; cached to /dev/shm)

Status monitoring (Uptime Kuma)

Field Value
Public surface status.ro.church (editor: /dashboard; public: /status/network)
Where "Docker running on the Oracle VM in the cloud" (per Network Monitoring)
Role Pings VMs and devices to track uptime

Cloud-hosted (Oracle Free Tier), not on RO Proxmox.

Other off-prem RO surfaces

For completeness — these are accessible via the docs landing page and exist outside the RO server:

  • portal.ro.church / dev.portal.ro.church — same rocc-db VM; listed here because they're the most-used surface.
  • Cloudflare account (dash.cloudflare.com) — DNS, Pages hosting, Access. The docs site lives behind Cloudflare Access per the team charter context.
  • Jamf Cloud (riveroaks.jamfcloud.com) — Apple device management. SaaS, not on-prem.
  • Jamf Protect (riveroaks.protect.jamfcloud.com) — SaaS.
  • GitLab (gitlab.beam-hosting.net) — code repo, public fallback for the self-hosted instance at git-local.beamnetworks.cloud.
  • Proxmox UIs — proxmox.1nine89.net (RO on-prem), px-prod.beam-hosting.net (Cole's datacenter where some RO VMs also live).

Network Fabric

IP plan (as inferred from configs)

Range Where Notes
10.200.5.0/24 RO main campus — sanctuary AV ProPresenter (10.200.5.5)
10.200.24.0/24 RO main campus — server VLAN rocc-db (.42); links backend (.36)
10.200.25.0/24 RO main campus — UniFi Access NVRs .155, .156
10.100.1.0/24 RO main campus — UniFi Network controller 10.100.1.58
172.16.64.0/24 Goshen campus VLAN UniFi Network at .10; Node-RED VMs at .24/.25/.26; example client device at .131
10.0.50.0/24 (unknown — confirm) — possibly home or DC MQTT broker at 10.0.50.13:8123
10.141.70.0/24, 10.141.74.0/24 RO VMs at the BN datacenter BillionMail at 10.141.74.109
172.19.x.x RO VMs at the BN datacenter (RADIUS subnet, etc.) RAD1/RAD2/RAD3 likely live here

VLAN map (unknown — confirm)

We don't have a complete VLAN list in source. What we do know:

  • Goshen campus has its own VLAN that maps to the 172.16.64.0/24 IP space.
  • RO main campus has at minimum a server VLAN (10.200.24.x), an AV VLAN (10.200.5.x), an Access-NVR VLAN (10.200.25.x), and a UniFi Network management surface (10.100.1.x).
  • IoT network exists per RADIUS Troubleshooting (Type 2 MAC-based accounts).

OSPF / WireGuard mesh

  • A dedicated VPN-Node VM at RO terminates the WireGuard mesh and runs FRR speaking OSPF to peers at home and the BN datacenter (a quick status-check sketch follows this list).
  • Cross-site reachability is via 10.x.x.x addresses; no public DNS bounce required for internal traffic.
  • Trust boundary: River Oaks cannot initiate connections into Beam Networks infra. RO-owned VMs that physically live at the BN datacenter (BillionMail, RADIUS, etc.) are the exception — they're already inside the BN edge.
  • Beam → RO outbound is unrestricted.
  • (unknown — confirm: VPN-Node hostname / VMID at RO.)
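
A quick, hedged status check for the mesh, run on the VPN-Node VM (hostname unknown; see Open Questions). It assumes the wg tool and FRR's vtysh are installed there.

```bash
# WireGuard peers, last handshakes, and transfer counters.
sudo wg show

# FRR: OSPF adjacencies to the home and BN-datacenter peers, and learned routes.
sudo vtysh -c 'show ip ospf neighbor'
sudo vtysh -c 'show ip route ospf'
```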

ISPs and public IP

  • Primary ISP: 1 Gbps fiber; provider (unknown — confirm; "Surf" appears in Goodbye SendGrid as a potential public-IP source).
  • Backup ISP: 300 / 40 Mbps cable.
  • Public IP allocation: single publicly-routable IP per site per the portfolio infrastructure note. Specific addresses (unknown — confirm).
  • Both ISPs are wired into the FRR-OSPF ring.

Public ingress

  • nginx on rocc-db is the primary public ingress for *.ro.church and the Goshen-wildcard vhosts.
  • Cloudflare sits in front for DNS, WAF, and Access-protected surfaces (e.g., this docs site, the Proxmox UI gating).
  • Cloudflare Tunnels are used for some surfaces, but nginx remains the primary ingress path.

Backbone

  • 10G SFP+ Ubiquiti backbone.
  • Switch model / firewall make (unknown — confirm at RO; OPNsense is the firewall at the BN datacenter, not necessarily here.)

Trust boundaries with Beam Networks

Pulled from Team/_SharedMemory/reference_infrastructure_topology_and_capacity.md:

  • RO can NOT initiate new connections into BN infra (firewall posture).
  • RO-owned VMs at the BN datacenter are exempt — they're already inside BN's edge. BillionMail, RAD1/2/3, possibly Ntfy fall in this bucket.
  • BN → RO is unrestricted.
  • Per-environment isolation rules:
      • RO Node-RED ↔ BN Node-RED: never crosses. Five+ instances on each side, completely separate (per reference_node_red_topology.md).
      • RO MySQL on rocc-db is never queried by BN-side code; BN's au-db.cg-e.net:6033 is never queried by RO code (per reference_mysql_hosts.md).
      • RO Snipe-IT and BN Snipe-IT do not federate.

Open Questions

The following items are (unknown — confirm with Cole) and would close the gaps in this inventory. None of them are blocking; this is the backlog for a follow-up pass once a human can answer.

Per-VM gaps

  1. The remaining 8–9 on-prem VMs that the Proxmox host reports as "online" but that aren't fingerprinted in riveroaks/infra source. Need: hostname, VMID, IP, role, owner. Likely candidates we've heard of but can't pin down: a media server, a local file/SMB share, a cyd host (per shared memory), maybe a print server.
  2. The remaining 2 LXC containers (1 of 3 is storage-mgr 109): what runs in the 100-class and 110-class container IDs we haven't named, or are those two unused?
  3. bitwarden VM — hostname, IP, public surface (if any). VMID is 110.
  4. ro-ntfy VM — confirmed location: on-prem RO, at BN datacenter, or both? IT docs hint at both at different times.
  5. db.nodered.ro.church (edit.db.nodered.ro.church) — which host runs it? Is it a separate VM or another runtime on rocc-db?
  6. links.gs.ro.church backend at 10.200.24.36:3000 — what's the actual VM name and what software runs on it (YOURLS? custom?).
  7. 10.0.50.13 MQTT broker — host name, location (RO on-prem? home? DC?), what subscribes to it.

Network-fabric gaps

  1. Full VLAN list for the RO main campus — IDs, names, IP ranges, which are routed vs. isolated.
  2. Public IP addresses (primary + backup ISP) for the RO site.
  3. Primary fiber ISP name (Surf? something else?).
  4. VPN-Node VM hostname/VMID at RO and the WireGuard peer inventory it terminates.
  5. TLS chain for *.ro.church — is it ACME on rocc-db, ACME at the edge in Cloudflare, or a wildcard issued elsewhere?
  6. Firewall make / model for the RO main campus (is it OPNsense, Ubiquiti UDM Pro, something else?).

Datacenter-side RO VMs

  1. Exact IPs / hostnames for RAD1, RAD2, RAD3 (FreeRADIUS) at the BN datacenter — currently only "in 172.19.x.x" inferred.
  2. Other RO-owned VMs at the BN datacenter beyond BillionMail and the RADIUS three — if any.

Service-level gaps

  1. Does rocc-db MySQL replicate anywhere? Or is it single-instance with backups only?
  2. ProPresenter VM — is 10.200.5.5 a Proxmox VM, a physical Mac/PC in the sanctuary, or both?
  3. Lyrics WebSocket process — currently described in portal docs as "must be started as a background process separately." Is it a systemd unit, a tmux session, a PM2 app, or something else? On rocc-db?
  4. Update docs-hosting.md — currently says the docs site runs on Proxmox at the datacenter. Reality (per .gitlab-ci.yml) is Cloudflare Pages. Worth a small PR.

Process-level

  1. Is there a documented mapping between gs.ro.church and the Goshen campus? This page assumes Goshen, but no docs site page explicitly says so.
  2. What runs on the Raspberry Pis "doing one-off tasks" at RO? Worth a short paragraph each if any of them serves traffic.

Appendix: source-file index

Files used to build this inventory (read-only). Listed for traceability so the next pass can regenerate cleanly.

From riveroaks/infra/:

  • README.md, roadmap.md, user-actions.md
  • nginx/hosts/rocc-db/README.md
  • nginx/hosts/rocc-db/nginx.conf
  • nginx/hosts/rocc-db/sites-enabled/*.conf (7 files)
  • nginx/hosts/rocc-db/sites-available/default
  • nodered/instances/rocc-db-local/README.md
  • nodered/instances/rocc-db-local/package.json
  • nodered/instances/rocc-db-local/flows.json (name only — contents contain auth tokens; not parsed in this inventory)
  • deploy/targets/nginx/rocc-db.json
  • deploy/targets/nodered/rocc-db-local.json
  • docs/inventory/nginx-hosts.md
  • docs/inventory/nodered-instances.md
  • docs/deploy/certificates.md
  • docs/deploy/gitlab-ci.md
  • .gitlab-ci.yml

From riveroaks/portal/:

  • README.md
  • dashboard/static/config.php (IP/host references only — no secrets read)
  • dashboard/v1.1/keys/config/unifi_api_config.php (IP/host references only)
  • dashboard/v1.1/keys/config/unifi_door_logs.php (IP/host references only)
  • dashboard/v1.1/tickets/trigger_email_service.php (IP/host references only)
  • lyrics/poll_propresenter.php (IP/host references only)
  • .env.example (template only)

From this docs site (riveroaks/docs/docs/):

  • index.md, IT/.nav.yml
  • IT/vm-backups.md, IT/vm-mainteance.md, IT/docs-hosting.md
  • IT/Inventory/home.md
  • IT/Networking/radius-troubleshooting.md
  • IT/Networking/status-monitoring.md
  • IT/Networking/uisp-manager.md
  • IT/Networking/ntfy-setup.md
  • IT/Networking/goodbye-sendgrid.md
  • IT/Networking/Doors/home.md
  • IT/Onboarding/notifications.md

From shared team memory (Team/_SharedMemory/):

  • reference_infrastructure_topology_and_capacity.md
  • reference_node_red_topology.md
  • reference_mysql_hosts.md
  • reference_snipe_it_inventory.md

No .env, .secret, credential, or private-key file was opened in producing this inventory.