This guide is for customer engineers preparing to deploy a Solace appliance. It covers what you need to provide, decisions to make before installation, and what the finished product looks like.
Solace is a support appliance that runs as a VM inside your Proxmox cluster. It provides monitoring, diagnostics, and a secure channel for your support partner to assist you remotely.
The appliance connects outbound to a central headend — no inbound ports are opened on your network. Your support partner can only access your environment when you explicitly grant permission through the Solace web UI.
The Solace appliance runs as a standard Linux VM on your Proxmox cluster. Create a VM with the following specs:
| Spec | Minimum | Recommended |
|---|---|---|
| OS | Debian 13 (Trixie) or Ubuntu 22.04+ | |
| CPU | 2 cores | 4 cores |
| RAM | 4 GB | 8 GB |
| Disk | 40 GB | 80 GB |
| Network | Management VLAN (must reach PVE nodes on port 8006) | |
A standard Debian 13 cloud image or minimal install works well. No special kernel modules, host agents, or Proxmox-specific configuration is required.
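For illustration, a VM at the recommended spec could be created from a PVE node's shell like this. The VM ID, storage name, and bridge are placeholders for your environment, and a cloud-image deployment would additionally import the image with qm importdisk rather than installing onto a blank disk:

```shell
# Illustrative only — adjust the VM ID (200), storage (local-lvm),
# and bridge (vmbr0) to match your cluster.
qm create 200 --name solace \
  --cores 4 --memory 8192 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:80 \
  --net0 virtio,bridge=vmbr0 \
  --ostype l26
```

Attach the VM's network interface to the management VLAN so it can reach the PVE nodes on port 8006.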
The VM needs internet access for initial setup (package installation, Docker image pulls). After installation, it only needs outbound access to three specific endpoints (see Firewall & Network Rules).
This is the most important decision in the deployment. The domain name affects service URLs, DNS records, and SSL certificates. Changing it later requires reinstalling the appliance.
Solace uses a domain suffix for all service URLs. The main web UI is at solace.<domain>, and add-on services like Grafana are at grafana.<domain>, pulse.<domain>, etc.
You choose a domain suffix. Solace creates subdomains under it:
- solace.<domain> — the main web UI
- grafana.<domain>, pulse.<domain>, etc. — add-on services

The domain must be resolvable on your network. If the domain only exists in external DNS, your nodes won't be able to communicate with Solace for metrics or monitoring. If you use a non-public domain (e.g. .local, .internal), SSL certificates cannot be issued and the appliance will use self-signed certificates (browser warnings).

Examples of domains that work well:

| Domain | Solace URL | Notes |
|---|---|---|
| syd.acme.com.au | solace.syd.acme.com.au | Site-specific subdomain of a public domain. Ideal. |
| pve-mgt.ot.acme.com.au | solace.pve-mgt.ot.acme.com.au | Longer but descriptive. Works well. |
| oxf.fsa.ninja | solace.oxf.fsa.ninja | Short dedicated domain. Works well. |

Common mistakes:

| Mistake | Problem |
|---|---|
| Setting domain to solace.acme.com.au | URL becomes solace.solace.acme.com.au — the system adds solace. automatically |
| Using .local or .internal | Not publicly resolvable — SSL certificates cannot be issued |
| Using a domain you can't add DNS records for | Wildcard A record and ACME CNAME cannot be created |
| Using external-only DNS | PVE nodes can't resolve Solace hostnames internally — metrics and monitoring fail |
Not sure what to pick? Discuss this with your support partner before the install date. They can help you choose a domain that works for your environment.
Two DNS records are needed:
Create a wildcard A record pointing to the Solace VM's IP address:
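Using syd.acme.com.au and 10.0.10.50 as placeholder values, the record would look like:

```
; illustrative zone entry — substitute your domain and the Solace VM's IP
*.syd.acme.com.au.    IN  A  10.0.10.50
```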
This must be resolvable from your PVE nodes. If you use split-horizon DNS, ensure the internal view also resolves these records.
After installation, your support partner will provide a CNAME record to add for SSL certificate validation:
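ACME DNS validation typically delegates the _acme-challenge name, so the record will have this general shape (the actual target comes from your support partner; do not guess it):

```
; illustrative — the real target is provided after installation
_acme-challenge.syd.acme.com.au.  IN  CNAME  <target-provided-by-support-partner>
```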
This is a one-time setup. Once added, SSL certificates are issued and renewed automatically.
If you can't add the CNAME record (e.g. internal-only domain), the appliance works fine with self-signed certificates. You'll see browser warnings, but functionality is unaffected.
No inbound ports are required: Solace initiates all external connections outbound, so no firewall rules need to be opened for incoming traffic from the internet.
Outbound (to the internet):

| Destination | Port | Purpose |
|---|---|---|
| solace.sol1.net | 443 (HTTPS) | Headend API, update checks, certificate delivery |
| solace.sol1.net | 22 (SSH) | Reverse tunnel for remote support sessions |
| gitlab.sol1.net | 5050 (HTTPS) | Docker image registry for appliance updates |

Internal (from the Solace VM to your infrastructure):

| Destination | Port | Purpose |
|---|---|---|
| PVE nodes | 8006 (HTTPS) | Proxmox VE API for monitoring and management |
| PVE nodes | 22 (SSH) | Automated patching — package upgrades via SSH (optional, see Automated Patching) |
| PBS servers (if any) | 8007 (HTTPS) | Proxmox Backup Server API for backup monitoring |
| vCenter / ESXi (if migrating) | 443 (HTTPS) | VMware API access for VM discovery and migration |

Inbound (from your LAN to the Solace VM):

| Destination | Port | Purpose |
|---|---|---|
| Solace VM | 443 (HTTPS) | Web UI access for administrators |
| Solace VM | 8086 (HTTP) | InfluxDB metrics endpoint (PVE pushes metrics here) |
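For context, Proxmox VE's external metric server feature pushes InfluxDB line-protocol points to this endpoint. A point has the general shape below (measurement, tags, and values are illustrative, not an exact sample of what PVE sends):

```
cpustat,object=nodes,host=pve1 cpu=0.12,iowait=0.01 1735689600000000000
```

That is: a measurement name with tags, then fields, then a nanosecond timestamp.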
If your network routes internet traffic through a corporate HTTP proxy, the installer will detect it automatically or prompt for the proxy URL during installation. The proxy is used for outbound connections to the headend, Docker registry, and certificate provider.
Traffic to private (RFC 1918) address ranges, including your PVE nodes and PBS servers, bypasses the proxy automatically.
If your vCenter or ESXi hosts use non-private IP addresses (e.g. public or carrier-grade NAT ranges), you may need to add them to the NO_PROXY list in /etc/solace/proxy.conf and restart the Docker containers. Your support partner can assist with this.
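As an illustration, such an addition to /etc/solace/proxy.conf might look like this (the variable name and format are assumptions; confirm with your support partner):

```
# /etc/solace/proxy.conf — illustrative only
NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,203.0.113.40
```

Here 203.0.113.40 stands in for a non-private ESXi address; restart the Docker containers afterwards so the change takes effect.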
Your support partner will SSH into the VM and run an automated installer. The process takes approximately 10–15 minutes.
During installation, you'll be asked to choose a LUKS passphrase. This encrypts a secure storage volume on the VM that holds all secrets, certificates, and sensitive configuration.
The LUKS passphrase is irrecoverable. If lost, the entire appliance must be reinstalled from scratch. Store this passphrase in a secure password manager.
The installer will install Docker, pull the appliance container images, create the LUKS-encrypted secure storage volume, and start the Solace services.
After installation, your support partner will configure the connection to the headend, add your Proxmox clusters, and install any requested app store services (Grafana, monitoring tools, etc.).
Once deployed, the Solace web UI is accessible at https://solace.<domain>. Here's what's included:
- **Node monitoring:** live status of every PVE node — CPU, RAM, disk, uptime, and subscription status at a glance.
- **Backup monitoring:** PBS datastore usage, backup job status, task summaries, and failure alerts.
- **Grafana dashboards:** pre-configured dashboards for PVE metrics — historical CPU, memory, network, and storage trends.
- **Remote support:** secure remote access for your support partner — you control when they connect and can disconnect at any time.
- **Health reports:** one-click health reports sent directly to your support partner — no SSH access required.
- **App store:** install additional tools like Pulse (real-time monitoring), ProxLB (load balancing), and PDM (multi-cluster management).
- **Patch management:** see every pending update across all nodes with CVE tracking. Create patch plans to roll out updates node by node with zero-downtime VM management.
- **License management:** install and track Proxmox VE and PBS license keys from the web UI.
- **Snapshot cleanup:** scan all clusters for stale VM snapshots. Flag old snapshots that waste storage and slow migrations.
- **NetBox sync:** sync PVE inventory to NetBox — clusters, nodes, and VMs with a review-before-push workflow.
- **Single sign-on:** integrate with Active Directory, LDAP, or OIDC/SSO.
Solace authenticates to your clusters with a dedicated SolaceSupport role limited to the Sys.Audit, Sys.Modify, Sys.Syslog, SDN.Audit, Mapping.Modify, Datastore.Audit, VM.Audit, and VM.Migrate privileges. It cannot create, modify, or delete VMs. VM configurations, storage data, user credentials, and network details are never sent to the headend.
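For reference, a role with exactly these privileges could be created on a PVE node with pveum (a sketch; setup normally handles this for you):

```shell
# Illustrative: create the SolaceSupport role with the privileges listed above.
pveum role add SolaceSupport \
  --privs "Sys.Audit,Sys.Modify,Sys.Syslog,SDN.Audit,Mapping.Modify,Datastore.Audit,VM.Audit,VM.Migrate"
```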
When the Solace VM boots (after initial installation, reboots, or power outages), it requires the LUKS passphrase to unlock the encrypted storage before services can start.
If the VM reboots unattended (e.g. after a power outage), Solace will not start until someone enters the LUKS passphrase at the VM console. Plan for this in your operations procedures.
Solace includes two levels of patch management:

- **Patch scanning:** read-only visibility of pending updates across all nodes. Works immediately with no additional setup.
- **Automated patching:** Solace drains VMs, runs apt-get dist-upgrade, reboots nodes one at a time, and returns VMs. Requires SSH access from the Solace VM to each PVE node.

If you want automated patch execution, the following needs to be set up on each PVE node. Your support partner will provide the exact script with the SSH public key pre-filled.
| Requirement | Details |
|---|---|
| SSH access | The Solace VM must be able to reach each PVE node on port 22. This is the same network that already carries PVE API traffic on port 8006. |
| solace-patch user | A dedicated system user on each PVE node with SSH key authentication. The SSH key is generated in Solace's encrypted Vault. |
| Sudoers entry | The solace-patch user needs passwordless sudo for apt-get and needrestart only — no other commands. |
| PVE API token update | The existing Solace API token needs VM.PowerMgmt and Sys.PowerMgmt privileges added for VM drain/return and node reboot operations. |
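The sudoers entry might look like the following (illustrative only; the exact file is part of the setup script your support partner provides):

```
# /etc/sudoers.d/solace-patch — illustrative
solace-patch ALL=(root) NOPASSWD: /usr/bin/apt-get, /usr/sbin/needrestart
```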
Your support partner will walk you through the setup and verify everything with a built-in validation check before any patching is attempted. Patch scanning (visibility only) works immediately with no additional setup.
Nodes are patched one at a time in the order you set. For each node, Solace:
1. Drains the node: running VMs are migrated to other nodes.
2. Sets Ceph noout flags to prevent unnecessary rebalancing (Ceph clusters only).
3. Runs apt-get dist-upgrade to install all pending updates.
4. Reboots the node if required, then returns its VMs once it is back online.

The process is fully automated but can be paused, resumed, or cancelled at any point from the Solace web UI.
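The per-node sequence can be sketched as a simple loop. Node names below are placeholders; the real implementation runs each step over SSH as the solace-patch user and waits for the node to come back before moving on:

```shell
#!/usr/bin/env bash
# Sketch of the per-node patch sequence — NOT the actual Solace implementation.
set -eu

patch_node() {
  local node="$1"
  echo "[$node] drain: migrate running VMs to other nodes"
  echo "[$node] set ceph noout to avoid rebalancing"
  echo "[$node] apt-get dist-upgrade"
  echo "[$node] reboot if required, then return VMs"
}

# Strictly one node at a time, in the configured order.
for node in pve1 pve2 pve3; do
  patch_node "$node"
done
```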
VMware migration is available only to eligible customers. Your support partner will enable this feature for your appliance if applicable. If you're interested in migrating from VMware to Proxmox, discuss this with your support partner.
For customers migrating from VMware to Proxmox VE, Solace includes a comprehensive migration tool that orchestrates the entire process directly from the appliance. The tool is designed to handle a wide range of scenarios — from fully automated migrations of standard Linux VMs to cautious import-only transfers of critical systems for manual review.
The migration tool requires some infrastructure in place before use:
| Requirement | Details |
|---|---|
| Shared NFS datastore | Mounted on both VMware ESXi hosts and Proxmox VE nodes. VMs are relocated to this datastore before disk import into PVE. |
| Dedicated PVE migration token | A separate API token configured within the appliance. Requires a role with: VM.Allocate, VM.Config.HWType, VM.Config.Options, VM.Config.Disk, VM.Config.CPU, VM.Config.Memory, VM.Config.Network, VM.Config.CDROM, VM.PowerMgmt, Datastore.AllocateSpace, Datastore.Audit, SDN.Use. The NFS storage must have import content type enabled. |
| Network connectivity | The Solace VM must be able to reach both vCenter/ESXi (port 443) and the target PVE cluster API (port 8006). |
Your support partner will help you set up the NFS datastore, PVE migration token, and vCenter connectivity as part of the migration engagement.
The Snapshots page scans all VMs and containers across your clusters for stale snapshots. Stale snapshots waste storage, slow down migrations, and can cause issues during backups.
If you use NetBox for infrastructure documentation, Solace can synchronize your PVE cluster inventory to NetBox. Clusters, nodes, and VMs are mapped to NetBox objects with a review-before-push workflow.
| Requirement | Details |
|---|---|
| NetBox instance | NetBox 4.0+ with API access. The URL and API token are configured in the Solace UI. |
| API token | A NetBox API token with read/write access to DCIM, Virtualization, IPAM, and Extras. Stored in Vault. |