Infrastructure Inventory
River Oaks Church - Created April 2026
This page is a single-source-of-truth inventory of the on-prem and adjacent
infrastructure that makes River Oaks technology run. It was assembled by
walking the riveroaks/infra and riveroaks/portal repos and stitching
configs, READMEs, and existing pages on this docs site together. Where a
detail wasn't determinable from source, it is marked
(unknown — confirm) in the Open Questions appendix.
This page is internal-only and lives behind the docs site's auth. Nothing on this page should be treated as a security boundary on its own.
Overview
- Single Proxmox server at the River Oaks main campus (no cluster).
- 40 logical CPU cores, 49.43 TiB storage, 13 VMs online, 0 stopped, 3 LXC containers. Utilization (point-in-time, 2026-04-27): 68% RAM, 19% CPU, 44% storage.
- Internet: 1 Gbps fiber primary, 300 / 40 Mbps cable backup.
- Network fabric: 10G backbone, Ubiquiti end-to-end ("pretty large" network for the site size). Plus a handful of Raspberry Pis on one-off tasks.
- Cross-site connectivity to Cole's datacenter and home runs over a WireGuard mesh with FRR-OSPF routing terminated at a dedicated VPN-Node VM at the church. Some River-Oaks-owned VMs sit at the Beam Networks datacenter and ride that mesh in.
- Public ingress: primarily nginx on `rocc-db` for `*.ro.church` apps, plus Cloudflare (Pages, Tunnels, Access) for some surfaces.
- Backups: three-tier — local (Proxmox host), NAS (`gofer2`), and weekly offsite to a Google Team Drive via `rclone` running in LXC 109. See VM Backups for the full schedule.
The server itself is referred to as rocc01 in older notes
(see Inventory home). The Proxmox web UI is at
https://proxmox.1nine89.net (per the
docs home page).
VMs and Containers
The list below is what is identifiable from repo source as of 2026-04-27.
The Proxmox host reports 13 VMs + 3 LXCs online, so several VMs / containers
exist that aren't fingerprinted in riveroaks/infra yet. Those are noted as
(unknown — confirm) in Open Questions.
rocc-db — webserver / DB / Node-RED / portal host
The single most loaded VM. Everything *.ro.church external lands here.
| Field | Value |
|---|---|
| Hostname | rocc-db |
| IP | 10.200.24.42 |
| Proxmox VMID | 100 (per backup policy in VM Backups) |
| OS | Ubuntu (PHP 8.1, nginx, MySQL/MariaDB) |
| Role | Public web tier + MySQL + Node-RED + portal stack |
| Public surface | portal.ro.church, forms.ro.church, lyrics.ro.church, dev.portal.ro.church, ro.church (short links), *.gs.ro.church (Goshen wildcard), links.gs.ro.church, eugenecarol.com, forms.eugenecarol.com |
| Internal-only listeners | nginx :1881 (Node-RED proxy), nginx :8080 (phpMyAdmin), nginx :8020 localhost-only (UniFi API helper), nginx :8123 TCP stream → 10.0.50.13:8123 (MQTT broker) |
| Services on host | nginx, php8.1-fpm, MySQL/MariaDB, Node-RED (PM2), local certbot, GitLab CI runner ro-portal-1 |
| Auto-deploy | nginx is wired through infra repo's deploy:nginx:rocc-db GitLab job (manual on main); portal deploys via riveroaks/portal CI on push to main / dev |
| Owner | Cole (primary) |
Notes pulled from source:
- The repo path `riveroaks/infra/nginx/hosts/rocc-db/` is a one-to-one mirror of `/etc/nginx/` on this VM. Top-level `nginx.conf` defines a `map $host $backend_endpoint` for `*.gs.ro.church` proxying and a `stream` listener on `:8123`.
- The Node-RED runtime at `~/.node-red/` (PM2 app `node-red`) listens on `:1880`; nginx proxies `:1881` to `localhost:1880`. The custom palette has 23 `node-red-contrib-*` modules (see `infra/nodered/instances/rocc-db-local/package.json`).
- The portal mounts at `/usr/share/nginx/portal/{dashboard,forms,lyrics,errors,unifiapi}/`. The dev portal mounts at `/usr/share/nginx/html/dev-portal/dashboard/`.
- TLS: the wildcard cert for `gs.ro.church` lives at `/home/riveroaks/local-certbot/certbot/conf/live/gs.ro.church/`. The `*.ro.church` chain is presumed managed elsewhere (unknown — confirm).
Reload: `sudo nginx -t && sudo systemctl reload nginx`. Node-RED: `pm2 restart node-red` (run via the user nvm PATH).
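Assembled from the notes above, the two `nginx.conf` mechanisms on `rocc-db` look roughly like this. This is a sketch, not the deployed file: only the three Goshen backends and the `:8123` stream target are confirmed in source; everything else (block placement, option defaults) is an assumption.

```nginx
http {
    # *.gs.ro.church: pick the upstream per Host header
    map $host $backend_endpoint {
        mpr-nr.gs.ro.church    172.16.64.24:1880;
        youth-nr.gs.ro.church  172.16.64.25:1880;
        tech-nr.gs.ro.church   172.16.64.26:1880;
    }
}

stream {
    # raw TCP pass-through to the MQTT broker
    server {
        listen 8123;
        proxy_pass 10.0.50.13:8123;
    }
}
```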
bitwarden — Bitwarden / Vaultwarden VM
| Field | Value |
|---|---|
| Hostname | (unknown — confirm; called "Bitwarden" in backup policy) |
| Proxmox VMID | 110 (high-frequency backup tier per VM Backups) |
| Role | Self-hosted password manager for IT credentials |
| Public surface | (unknown — confirm) — referenced as the credential store of record across the IT docs |
| Owner | Cole (primary) |
storage-mgr — backup orchestrator (LXC 109)
| Field | Value |
|---|---|
| Container ID | 109 (LXC) |
| Role | Drives nightly rclone push of Proxmox backups to Google Team Drive |
| Mounts | /mnt/proxmox-backups (read-only bind from host's /var/lib/vz/dump) |
| Cron | 0 0 * * * /root/proxmox-backup-sync.sh — weekly snapshot, 13-week retention |
| Excluded from local backup jobs | yes, to avoid recursion |
Documented at VM Backups.
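The retention side of that job is simple if offsite folders are named by ISO week, since week names sort lexically. The snippet below is a runnable illustration of the pruning logic only — the folder-naming scheme and demo directory are hypothetical, and `rm -r` stands in for what the real script would presumably do with `rclone purge`:

```shell
# Illustrative 13-week retention: week folders sort lexically
# (2026-W01 < 2026-W02 ...), so "keep the newest 13" is:
# sort, take everything except the last 13, delete those.
RETAIN=13
rm -rf /tmp/offsite-demo && mkdir -p /tmp/offsite-demo && cd /tmp/offsite-demo
for w in $(seq -w 1 15); do mkdir -p "2026-W$w"; done   # simulate 15 weeks

ls -1 | sort | head -n -"$RETAIN" | while read -r old; do
  rm -r "$old"        # production would be: rclone purge "$REMOTE/$old"
done
ls -1 | sort | head -n 1   # → 2026-W03 (oldest surviving week)
```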
ro-ntfy — Ntfy notifications gateway
| Field | Value |
|---|---|
| Hostname | ro-ntfy (called out by name in Ntfy Account Setup) |
| IP | (unknown — confirm) |
| Role | Alert/notification fan-out for IT |
| Public surface | notifications.ro.church |
Per IT Notifications, the Ntfy service is
described as currently hosted at the datacenter. The user-add procedure
(Ntfy Account Setup) tells operators to log
into Proxmox and find a VM named ro-ntfy, which suggests an on-prem VM
exists — (confirm: is ro-ntfy on-prem at RO, at the datacenter, or
both?).
billionmail — MailerSend dynamic-template substitute
| Field | Value |
|---|---|
| Hostname | (unknown — confirm; service name "BillionMail") |
| IP | 10.141.74.109 |
| Role | Self-hosted dynamic email templating; receives traffic from db-nodered |
| Where it actually runs today | datacenter (per Goodbye SendGrid, with a stated plan to move it to the church) |
Lives on the same VLAN as the rest of the RO VMs at the datacenter, so RO can reach it via the OSPF mesh through the static route to the datacenter. Listed here because it is a River-Oaks-owned VM even though it is currently parked at the BN datacenter.
Goshen-campus Node-RED VMs (mpr-nr, youth-nr, tech-nr)
Three Node-RED runtimes referenced by the map $host $backend_endpoint
block in rocc-db's nginx.conf:
| Hostname (vhost) | Backend |
|---|---|
| `mpr-nr.gs.ro.church` | `172.16.64.24:1880` |
| `youth-nr.gs.ro.church` | `172.16.64.25:1880` |
| `tech-nr.gs.ro.church` | `172.16.64.26:1880` |
All three are reverse-proxied via the *.gs.ro.church wildcard vhost
on rocc-db, with WebSocket upgrade headers preserved. They sit on the
Goshen campus VLAN (172.16.64.0/24). Whether each is its own VM,
its own LXC, or all on one Proxmox host at the Goshen campus is
(unknown — confirm).
This is consistent with the shared-memory Node-RED topology note that River Oaks runs 5–10 separate Node-RED instances: these three, plus rocc-db-local, plus db.nodered.ro.church (see below), put the visible count at five.
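The wildcard vhost that fronts these three presumably looks something like the sketch below. The `proxy_pass $backend_endpoint` variable and the WebSocket upgrade headers match the behavior described above; the `listen`/TLS lines are assumptions and omit certificate paths:

```nginx
# Hypothetical *.gs.ro.church wildcard vhost on rocc-db.
server {
    listen 443 ssl;
    server_name *.gs.ro.church;

    location / {
        # $backend_endpoint comes from the map in nginx.conf
        proxy_pass http://$backend_endpoint;
        # preserve WebSocket upgrades for the Node-RED editor/runtime
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```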
db-nodered / edit.db.nodered.ro.church — primary RO Node-RED
| Field | Value |
|---|---|
| Hostname / FQDN | edit.db.nodered.ro.church (editor); db.nodered.ro.church (logical) |
| Where it runs | (unknown — confirm) — not the rocc-db-local instance based on naming, and not in the infra repo yet |
| Role | Webhook target for the portal — login events, WiFi signups, key fob requests, account creation, email send-out |
| Talked to by | riveroaks/portal (per portal README.md); RO MailerSend / BillionMail flow |
| Owner | (unknown — confirm; presumed Cole) |
The portal's README.md calls out this Node-RED endpoint by FQDN as
the webhook receiver for the WiFi onboarding flow and the login event
log. Distinct from rocc-db-local (port 1881 on rocc-db itself), so
this is at minimum a fourth RO Node-RED instance.
links.gs.ro.church backend
| Field | Value |
|---|---|
| Hostname | (unknown — confirm) |
| IP / Port | 10.200.24.36:3000 (per nginx vhost in infra/nginx/hosts/rocc-db/sites-enabled/gs.ro.church.conf) |
| Role | Goshen short-link service |
| Public surface | links.gs.ro.church |
10.200.24.36 differs from rocc-db (.42) only in the final octet, suggesting a sibling VM on the same 10.200.24.0/24 subnet. Possibly a YOURLS or similar URL-shortener stack.
Unidentified surfaces in 10.0.50.0/24 and 10.200.25.0/24
Two additional surfaces visible from configs but not yet documented:
- `10.0.50.13:8123` — TCP stream proxy target from `rocc-db`'s nginx (`stream { server { listen 8123; proxy_pass 10.0.50.13:8123; } }`). Cross-referenced as the MQTT broker address used by Node-RED flows (per `infra/nodered/instances/rocc-db-local/flows.json` line 643). (confirm host — could be at the datacenter on the OSPF mesh given the `10.0.x.x` prefix, or could be a homelab address routed in.)
- `10.200.25.155` and `10.200.25.156` on TCP `12445` — two-NVR UniFi Access cluster ("stacked NVRs" per the Door Access home page). Hardware appliances, not Proxmox VMs. Termination for door-control and key-fob enforcement.
Cluster of Datacenter-resident RO VMs
Several of the IPs that show up in portal source live in the
10.141.x.x and 172.19.x.x ranges, which belong to RO-owned
VMs hosted on the BN datacenter cluster (the trust-boundary exception
called out in
Team/_SharedMemory/reference_infrastructure_topology_and_capacity.md).
The ones identified by name:
| Service | Address | Identified from |
|---|---|---|
| BillionMail | `10.141.74.109` | Goodbye SendGrid |
| RAD1 / RAD2 / RAD3 (FreeRADIUS) | (unknown — confirm exact IPs; on the `172.19.x.x` range based on portal log queries) | RADIUS Troubleshooting — "hosted on 3 virtual machines on my (Cole's) servers at the datacenter" |
These are documented under the River Oaks side because they're RO-owned even though they're physically at the BN datacenter — this matches the trust-boundary policy.
Other Proxmox guests on the on-prem server (unidentified)
The Proxmox host reports 13 VMs + 3 LXCs total. The list above
fingerprints roughly 4–5 on-prem VMs (rocc-db, bitwarden, ro-ntfy
if on-prem, possibly the links backend at 10.200.24.36, possibly more)
plus 1 LXC (storage-mgr 109). That leaves a sizable gap. See
Open Questions for the list of unfingerprinted
guests we'll need to confirm with Cole.
Services not on a dedicated VM
MySQL / MariaDB
Lives on rocc-db (localhost) and is the database for the entire
portal monorepo. Three databases are in active use:
- `public` — primary portal database (users, key fobs, networks, logs, vars).
- `tickets` — IT/maintenance tickets.
- `expense_forms` — March 2026 expense-request app, plus line items, attachments, and audit events.
A fourth database (logs) is referenced in dashboard/static/config.php.
Connection details are loaded from dashboard/static/config.php and
the root .env (for the expense flow). MySQL is localhost-only on
rocc-db; no external listener.
This is the RO MySQL host. It is separate from Beam's
au-db.cg-e.net:6033 instance — see
Team/_SharedMemory/reference_mysql_hosts.md. RO never connects to
au-db; Beam never connects to this one.
Node-RED runtimes (RO side)
At least five identifiable instances; shared-memory Node-RED topology note says 5–10 total.
| Instance | Where | Notes |
|---|---|---|
| `rocc-db-local` | `rocc-db` (`10.200.24.42:1880`, proxied via nginx `:1881`) | PM2-managed; flows tracked in `infra/nodered/instances/rocc-db-local/`. Has `adminAuth` enabled. |
| `mpr-nr` | `172.16.64.24:1880` (Goshen) | Multipurpose-room flows (presumed). |
| `youth-nr` | `172.16.64.25:1880` (Goshen) | Youth flows. |
| `tech-nr` | `172.16.64.26:1880` (Goshen) | Tech-team flows. |
| `db.nodered.ro.church` (`edit.db.nodered.ro.church`) | (unknown host) | Primary webhook receiver for the portal. Distinct from `rocc-db-local`. |
Five identified; 0–5 more (unknown — confirm).
rocc-db-local's flow file references the MQTT broker at
10.0.50.13:8123 and an external auth-event SQL flow that writes to a
nr-user-access table. Several flows POST to UniFi Access NVRs at
10.200.25.155 / 10.200.25.156.
Snipe-IT (asset tracking)
| Field | Value |
|---|---|
| Public surface | inventory.ro.church |
| Where | "Snipe-IT is ran in a Docker hosting software called Cloudron. The Cloudron VM is ran on rocc01." (per Inventory home) |
| Owner | Cole / RO IT |
This is a separate Snipe-IT instance from Beam Networks' Snipe-IT
(per Team/_SharedMemory/reference_snipe_it_inventory.md). Data does
not federate.
Documentation site (this site)
| Field | Value |
|---|---|
| Public surface | docs.ro.church (auth-protected) |
| Build | MkDocs (Material theme) — mkdocs build && wrangler pages deploy site/ |
| Hosting | Cloudflare Pages (project ro-docs) — see riveroaks/docs/.gitlab-ci.yml |
| CI runner | ro-docker-1 |
| Repo | git-local.beamnetworks.cloud:riveroaks/docs.git, branch main |
Note: the page Documentation Site Hosting currently says the site runs on Cole's Proxmox cluster at the datacenter — that is out of date as of 2026-04. The CI definition deploys to Cloudflare Pages now. Worth a follow-up edit when convenient.
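Given the table above, the CI definition presumably resembles the sketch below. The build commands, runner tag `ro-docker-1`, and Pages project `ro-docs` are from source; the job name, image, branch rule, and wrangler invocation details are assumptions:

```yaml
# Hypothetical sketch of riveroaks/docs/.gitlab-ci.yml
pages-deploy:            # job name is an assumption
  tags: [ro-docker-1]    # CI runner per the table above
  image: python:3.12     # assumed image
  script:
    - pip install mkdocs-material
    - mkdocs build
    - npx wrangler pages deploy site/ --project-name ro-docs
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```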
UniFi Network Controller
| Field | Value |
|---|---|
| Internal endpoint | https://10.100.1.58 (per portal README.md, "External Integrations") |
| Where | UDM / NVR appliance, not a VM |
| Role | Manages WiFi WLANs and access groups on the campus network |
| Used by | portal's UniFi API client for WiFi WLAN management |
UniFi Access (door / key fob)
| Field | Value |
|---|---|
| Internal endpoints | https://10.200.25.155:12445, https://10.200.25.156:12445 |
| Where | "stacked NVRs" — hardware appliances at the church |
| Role | Door access control + key-fob enforcement |
| Used by | portal key-fob flow + Node-RED flows that grant/revoke user-group membership |
UISP (PTP wireless)
| Field | Value |
|---|---|
| Public surface | uisp.beamnetworks.dev |
| Where | Hosted at Beam Networks' datacenter, not at RO (per UISP Hosting) |
| Role | Manage Ubiquiti PTP wireless links between buildings |
Out of scope for this RO inventory but documented here because the existing docs page already calls it out.
ProPresenter
| Field | Value |
|---|---|
| Internal endpoint | http://10.200.5.5:1025/v1/ |
| Where | Sanctuary control PC (not on Proxmox) |
| Role | Source of slide / lyric data for lyrics.ro.church |
| Polled by | lyrics/poll_propresenter.php on rocc-db (every 50 ms; cached to /dev/shm) |
Status monitoring (Uptime Kuma)
| Field | Value |
|---|---|
| Public surface | status.ro.church (editor: /dashboard; public: /status/network) |
| Where | "Docker running on the Oracle VM in the cloud" (per Network Monitoring) |
| Role | Pings VMs and devices to track uptime |
Cloud-hosted (Oracle Free Tier), not on RO Proxmox.
Other off-prem RO surfaces
For completeness — these are accessible via the docs landing page and exist outside the RO server:
- `portal.ro.church` / `dev.portal.ro.church` — same `rocc-db` VM; listed here because they're the most-used surface.
- Cloudflare account (`dash.cloudflare.com`) — DNS, Pages hosting, Access. The docs site lives behind Cloudflare Access per the team charter context.
- Jamf Cloud (`riveroaks.jamfcloud.com`) — Apple device management. SaaS, not on-prem.
- Jamf Protect (`riveroaks.protect.jamfcloud.com`) — SaaS.
- GitLab (`gitlab.beam-hosting.net`) — code repo, public fallback for the self-hosted instance at `git-local.beamnetworks.cloud`.
- Proxmox UIs — `proxmox.1nine89.net` (RO on-prem), `px-prod.beam-hosting.net` (Cole's datacenter where some RO VMs also live).
Network Fabric
IP plan (as inferred from configs)
| Range | Where | Notes |
|---|---|---|
| `10.200.5.0/24` | RO main campus — sanctuary AV | ProPresenter (`10.200.5.5`) |
| `10.200.24.0/24` | RO main campus — server VLAN | `rocc-db` (.42); links backend (.36) |
| `10.200.25.0/24` | RO main campus — UniFi Access NVRs | .155, .156 |
| `10.100.1.0/24` | RO main campus — UniFi Network controller | `10.100.1.58` |
| `172.16.64.0/24` | Goshen campus VLAN | UniFi Network at .10; Node-RED VMs at .24/.25/.26; client devices (.131 example) |
| `10.0.50.0/24` | (unknown — confirm) — possibly home or DC | MQTT broker at `10.0.50.13:8123` |
| `10.141.70.0/24`, `10.141.74.0/24` | RO VMs at the BN datacenter | BillionMail at `10.141.74.109` |
| `172.19.x.x` | RO VMs at the BN datacenter (RADIUS subnet, etc.) | RAD1/RAD2/RAD3 likely live here |
VLAN map (unknown — confirm)
We don't have a complete VLAN list in source. What we do know:
- Goshen campus has its own VLAN that maps to the `172.16.64.0/24` IP space.
- RO main campus has at minimum a server VLAN (`10.200.24.x`), an AV VLAN (`10.200.5.x`), an Access-NVR VLAN (`10.200.25.x`), and a UniFi Network management surface (`10.100.1.x`).
- An IoT network exists per RADIUS Troubleshooting (Type 2 MAC-based accounts).
OSPF / WireGuard mesh
- A dedicated VPN-Node VM at RO terminates the WireGuard mesh and runs FRR speaking OSPF to peers at home and the BN datacenter.
- Cross-site reachability is via `10.x.x.x` addresses; no public DNS bounce is required for internal traffic.
- Trust boundary: River Oaks cannot initiate connections into Beam Networks infra. RO-owned VMs that physically live at the BN datacenter (BillionMail, RADIUS, etc.) are the exception — they're already inside the BN edge.
- Beam → RO outbound is unrestricted.
- (unknown — confirm: VPN-Node hostname / VMID at RO.)
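As a rough illustration of the routing setup described above, the VPN-Node's FRR config likely resembles the fragment below. Only "FRR speaking OSPF over a WireGuard mesh" is stated in source — the interface names (`wg0`, `wg1`), router-id, and single-area layout are all assumptions:

```
# Hypothetical /etc/frr/frr.conf sketch for the VPN-Node VM.
router ospf
 ospf router-id 10.200.24.1
 network 10.200.24.0/24 area 0
!
interface wg0
 ! WireGuard peer toward the BN datacenter (assumed name)
 ip ospf area 0
 ip ospf network point-to-point
!
interface wg1
 ! WireGuard peer toward Cole's home (assumed name)
 ip ospf area 0
 ip ospf network point-to-point
```

Point-to-point network type is the usual choice on WireGuard tunnels, since there is no broadcast segment for DR election.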
ISPs and public IP
- Primary ISP: 1 Gbps fiber (provider: (unknown — confirm; "Surf" appears in Goodbye SendGrid as a potential public-IP source)).
- Backup ISP: 300 / 40 Mbps cable.
- Public IP allocation: single publicly-routable IP per site per the portfolio infrastructure note. Specific addresses (unknown — confirm).
- Both ISPs are wired into the FRR-OSPF ring.
Public ingress
- nginx on `rocc-db` is the primary public ingress for `*.ro.church` and the Goshen-wildcard vhosts.
- Cloudflare sits in front for DNS, WAF, and Access-protected surfaces (e.g., this docs site, the Proxmox UI gating).
- Cloudflare Tunnels carry some surfaces, but nginx remains the primary path.
Backbone
- 10G SFP+ Ubiquiti backbone.
- Switch model / firewall make (unknown — confirm at RO; OPNsense is the firewall at the BN datacenter, not necessarily here.)
Trust boundaries with Beam Networks
Pulled from
Team/_SharedMemory/reference_infrastructure_topology_and_capacity.md:
- RO can NOT initiate new connections into BN infra (firewall posture).
- RO-owned VMs at the BN datacenter are exempt — they're already inside BN's edge. BillionMail, RAD1/2/3, possibly Ntfy fall in this bucket.
- BN → RO is unrestricted.
- Per-environment isolation rules:
- RO Node-RED ↔ BN Node-RED: never crosses. Five-plus instances on each side, completely separate (per `reference_node_red_topology.md`).
- RO MySQL on `rocc-db` is never queried by BN-side code; BN's `au-db.cg-e.net:6033` is never queried by RO code (per `reference_mysql_hosts.md`).
- RO Snipe-IT and BN Snipe-IT do not federate.
Open Questions
The following items are (unknown — confirm with Cole) and would close the gaps in this inventory. None of them are blocking; this is the backlog for a follow-up pass once a human can answer.
Per-VM gaps
- The remaining 8–9 on-prem VMs that the Proxmox host reports as "online" but that aren't fingerprinted in `riveroaks/infra` source. Need: hostname, VMID, IP, role, owner. Likely candidates we've heard of but can't pin down: a media server, a local file/SMB share, a `cyd` host (per shared memory), maybe a print server.
- The remaining 2 LXC containers (1 of 3 is `storage-mgr` 109). What runs in the 100-class and 110-class containers we haven't named? Or are those two unused?
- `bitwarden` VM — hostname, IP, public surface (if any). VMID is 110.
- `ro-ntfy` VM — confirmed location: on-prem RO, at BN datacenter, or both? IT docs hint at both at different times.
- `db.nodered.ro.church` (`edit.db.nodered.ro.church`) — which host runs it? Is it a separate VM or another runtime on `rocc-db`?
- `links.gs.ro.church` backend at `10.200.24.36:3000` — what's the actual VM name and what software runs on it (YOURLS? custom?).
- `10.0.50.13` MQTT broker — host name, location (RO on-prem? home? DC?), what subscribes to it.
Network-fabric gaps
- Full VLAN list for the RO main campus — IDs, names, IP ranges, which are routed vs. isolated.
- Public IP addresses (primary + backup ISP) for the RO site.
- Primary fiber ISP name (Surf? something else?).
- VPN-Node VM hostname/VMID at RO and the WireGuard peer inventory it terminates.
- TLS chain for `*.ro.church` — is it ACME on `rocc-db`, ACME at the edge in Cloudflare, or a wildcard issued elsewhere?
- Firewall make / model for the RO main campus (is it OPNsense, Ubiquiti UDM Pro, something else?).
Datacenter-side RO VMs
- Exact IPs / hostnames for RAD1, RAD2, RAD3 (FreeRADIUS) at the BN datacenter — currently only "in `172.19.x.x`" inferred.
- Other RO-owned VMs at the BN datacenter beyond BillionMail and the RADIUS three — if any.
Service-level gaps
- Does `rocc-db` MySQL replicate anywhere? Or is it single-instance with backups only?
- ProPresenter VM — is `10.200.5.5` a Proxmox VM, a physical Mac/PC in the sanctuary, or both?
- Lyrics WebSocket process — currently described in portal docs as "must be started as a background process separately." Is it a systemd unit, a tmux session, a PM2 app, or something else? On `rocc-db`?
- Update `docs-hosting.md` — currently says the docs site runs on Proxmox at the datacenter. Reality (per `.gitlab-ci.yml`) is Cloudflare Pages. Worth a small PR.
Process-level
- Is there a documented mapping between `gs.ro.church` and the Goshen campus? This page assumes Goshen, but no docs site page explicitly says so.
- What runs on the Raspberry Pis "doing one-off tasks" at RO? Worth a short paragraph each if any of them serves traffic.
Appendix: source-file index
Files used to build this inventory (read-only). Listed for traceability so the next pass can regenerate cleanly.
From riveroaks/infra/:
- `README.md`, `roadmap.md`, `user-actions.md`
- `nginx/hosts/rocc-db/README.md`
- `nginx/hosts/rocc-db/nginx.conf`
- `nginx/hosts/rocc-db/sites-enabled/*.conf` (7 files)
- `nginx/hosts/rocc-db/sites-available/default`
- `nodered/instances/rocc-db-local/README.md`
- `nodered/instances/rocc-db-local/package.json`
- `nodered/instances/rocc-db-local/flows.json` (name only — contents contain auth tokens; not parsed in this inventory)
- `deploy/targets/nginx/rocc-db.json`
- `deploy/targets/nodered/rocc-db-local.json`
- `docs/inventory/nginx-hosts.md`
- `docs/inventory/nodered-instances.md`
- `docs/deploy/certificates.md`
- `docs/deploy/gitlab-ci.md`
- `.gitlab-ci.yml`
From riveroaks/portal/:
- `README.md`
- `dashboard/static/config.php` (IP/host references only — no secrets read)
- `dashboard/v1.1/keys/config/unifi_api_config.php` (IP/host references only)
- `dashboard/v1.1/keys/config/unifi_door_logs.php` (IP/host references only)
- `dashboard/v1.1/tickets/trigger_email_service.php` (IP/host references only)
- `lyrics/poll_propresenter.php` (IP/host references only)
- `.env.example` (template only)
From this docs site (riveroaks/docs/docs/):
- `index.md`, `IT/.nav.yml`
- `IT/vm-backups.md`, `IT/vm-mainteance.md`, `IT/docs-hosting.md`
- `IT/Inventory/home.md`
- `IT/Networking/radius-troubleshooting.md`
- `IT/Networking/status-monitoring.md`
- `IT/Networking/uisp-manager.md`
- `IT/Networking/ntfy-setup.md`
- `IT/Networking/goodbye-sendgrid.md`
- `IT/Networking/Doors/home.md`
- `IT/Onboarding/notifications.md`
From shared team memory (Team/_SharedMemory/):
- `reference_infrastructure_topology_and_capacity.md`
- `reference_node_red_topology.md`
- `reference_mysql_hosts.md`
- `reference_snipe_it_inventory.md`
No .env, .secret, credential, or private-key file was opened in
producing this inventory.