From pfSense to UniFi Independent Gateway

Back around April of 2021, I moved from my SonicWall TZ205W to pfSense running on a whitebox “router” chassis (see https://blog.pelleys.com/?p=652). That lasted until the chassis started having issues, and in May 2023 I upgraded to a Netgate 6100 (see https://blog.pelleys.com/?p=988). However, ever since I started in the Ubiquiti ecosystem with a UniFi AP Lite back in February 2019, I have liked the full ecosystem, but on the firewall side UniFi “just didn’t have it.” It gave a nice single pane of glass but didn’t have the full capabilities of a “proper” firewall.

All that started to change in December 2024 with the release of UniFi Network v9, and by the end of 2025 UniFi Network v10 seemed to have finally “hit it.” Then Ubiquiti came out with the UniFi Cloud Gateway (UCG) Fiber (damn… I keep wanting to spell that the correct way: FIBRE). As some YouTubers have asked: who is this made for? I mean, the UniFi Dream Machine (UDM) Pro Max and the UCG Fiber can both basically push full IDS/IPS packets at the same rate. The UDM Pro Max is made for more clients, has more ports, and has two internal HDD/SSD bays versus a single NVMe slot. But the UCG Fiber has a more modern quad-core ARM Cortex-A73 at 2.2 GHz, while the UDM Pro Max has an older quad-core ARM Cortex-A57 at 2.0 GHz. Plus you have the price difference: the UDM Pro Max is CDN$799 as opposed to the UCG Fiber at CDN$379 (as of January 31, 2026; RAM, etc. pricing increases will likely push this – much? – higher).

So, with that in mind, I decided that I would give myself a UCG Fiber for Christmas (I think I was good enough this year 😊). The UCG would also let me migrate off my self-hosted UniFi Network installation. A note on this: my biggest concerns in moving from pfSense to the UCG were (a) impacting my existing switch configurations and VLANs and (b) working through re-creating my firewall rules without inadvertently exposing my network (more on my approach below).

With great expectation, I awaited Black Friday. And lo and behold, the UCG Fiber was on sale… BUT, it wasn’t. It was the UXG Fiber. All the same capabilities as the UCG Fiber but without UniFi Network built in (or the ability to have internal UniFi Protect storage). The price was right at CDN$279, so I clicked “Place Order.” What the hell, I have experience setting up the self-hosted UniFi Network…

Then second thoughts came around. While I could set up a VM on my laptop, configure a UniFi Network controller, import the pre-existing configuration, update it, add the required firewall rules, change the gateway from 3rd party to the UXG, etc., doubts kept creeping in. Then I remembered that Ubiquiti has a solution: the CloudKey Gen2 Plus (UCK). Ever since that first AP Lite, the CloudKey has come to mind more than once, as it would remove the need for Proxmox to be up just to keep the UniFi Network controller up. But I kept returning to “good enough is good enough.” This time, push came to shove and I ordered a CloudKey Gen2 Plus (with the 1TB SSD) as an early birthday present. One of these days I might buy a UniFi camera or two, so there is that… Maybe some day…

Anyway, with my UXG Fiber and CloudKey Gen2 Plus in hand, I began to plan my migration. Despite lots of web searches, posting to forums, etc., I did not find what I considered a good approach. With that in mind, my approach is below. I suspect it will also work for setting up a self-hosted UniFi Network controller.

  1. Back up and download your configuration from your self-hosted controller.
  2. Plug your computer into one of the LAN ports on the UXG (I reserved port 4, which has PoE and would power the UCK, since the UCK does not come with a power supply – not that it was an issue in my case).
    • This may be optional, but as I do not use the 192.168.1.0/24 network (and my gateways are always .254 for… reasons), I needed to change the LAN settings to my “core” network subnet with the UXG being .254.
  3. Once that is done, plug the UCK into port 4 of the UXG and let it boot. It will get a DHCP address from your “correct” network (nice that it shows up on the UXG’s display).
    • Run through the normal setup.
    • Do not adopt the UXG yet – doing so seems to put you into some kind of circular dependency.
  4. Make sure you set up a new UniFi site – this may help you during setup (e.g., compare what is in production v. what you want to do).
  5. Restore the working configuration you downloaded from your production self-hosted controller.
    • You will see your UniFi devices as being offline – which they are for the UCK but this shows that your configuration has been successfully imported.
  6. Change the gateways for the vlans from 3rd party gateway to the UXG.
    • You will have to set the network gateways (.254 in my case) as these were previously provided by your old firewall/router.
  7. If you can export your DHCP reservations, you can import them from a .csv (noice – see the sample rows after this list).
    • The format is: MAC Address | IP Address | Hostname (FQDN) | Local DNS Record (Left this blank) | Lease Type (Fixed or Dynamic) | Name Expiration Time (Left this blank)
  8. You will have to manually add your internal DNS records (note that this is a type of “policy” in UniFi land).
  9. Assign your network to the appropriate zones and/or create new zones.
  10. Create the appropriate firewall rule policies (similar to what you would do on a “traditional” firewall but the policies can be applied to zones and all the network(s) in the zone).
  11. Once you are satisfied and have backed up both the new and old UniFi controller configurations:
    • Power down your old UniFi controller. I made sure to (1) remove it from auto-start and (2) for good measure, change its IP address to something not in use.
    • Unplug your current firewall.
    • Plug the UXG and UCK into the current network and let them boot.
    • Wait for all your original UniFi devices to “come back online”.
    • Start testing everything.
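Regarding step 7, here is a rough illustration of what the .csv rows can look like, following the column order above. The MAC addresses, IPs and hostnames below are made up, and I left the Local DNS Record and Expiration Time columns blank just as I did:

aa:bb:cc:dd:ee:01,10.0.10.10,nas.home.lan,,Fixed,
aa:bb:cc:dd:ee:02,10.0.10.21,printer.home.lan,,Fixed,
aa:bb:cc:dd:ee:03,10.0.20.50,ap-garage.home.lan,,Fixed,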

You will probably have to spend a couple of hours tweaking your firewall rules – I did.

One thing that I would really like Ubiquiti to do is create a 19″ rack-mount kit for the UXG/UCG. They have one for the CloudKey Gen2 (both the original and the Plus). Maybe it would cause “confusion” between the UDM Pro Max and the UCG/UXG? That said, there are more than a few 3D-printer files available. I found one with a “holder” for the external power brick plus 5 keystone ports, which I had one of the guys at work print out for $50. (Thanks Sheldon!)

One added note: with pfSense (and I think it is a pfSense/BSD issue rather than a Netgate one), if I had any blip, from a WAN disconnect/reconnect on the Bell Giga Hub to a power outage, I would have to perform the following incantation to get Internet connectivity back with the Advanced DMZ feature:

  1. On pfSense, go to the WAN interface and release the WAN IP
  2. Go to the Giga Hub and turn off Advanced DMZ
  3. Back on pfSense, renew the WAN IP – it will get an internal IP (i.e., double NAT)
  4. On the Giga Hub, enable Advanced DMZ and select the pfSense MAC
  5. Back on pfSense, release and renew the DHCP WAN address to get the public IP

With the UniFi UXG, this foolishness is no longer needed! I tried it – twice!


What a month…

Shortly after a wonderful vacation in Paris – despite it being so hot (35°C plus) that the trees were dropping leaves like it was fall – one of the ports on my 10-month-old UniFi USW Aggregation switch failed. It was carrying all the traffic from one of my Proxmox hosts as well as the connection from the core network to the Netgate 6100 firewall. This meant a whole lot of reconfiguration in a bit of a rush. And the VMs really “missed” 10 GbE speed.

The Ubiquiti RMA process was pretty quick. Within five minutes of putting my RMA request in, noting that it was a failed port (port failure was one of the options), my RMA was approved. The only downside is that you have to pay for shipping back to Ubiquiti, I guess because I have the standard warranty. So, with the shipping to Ontario, time for them to verify receipt, and sending it back (no charge), it took a couple of weeks. Of course, a couple of weekends were involved.

While I was waiting, one of the 3TB drives in my DS216+II, which I use for my second-level backup, started failing with over 90,000 hours on it. I’ve had the DS216+II for almost exactly eight years, and I had previously used the drive in my FreeNAS server. Obviously, it was not under warranty. Two days after I got the replacement (only CDN$90), the old drive failed outright. Or “crashed,” according to DSM. As always, drive replacement with DSM is seamless.

Anyway, today I got around to replacing the USW Aggregation. It was a little less rushed this time, as I woke up at 6:00 AM so as to not annoy my lovely wife. One thing I realized I need to do is label the DAC cables, so I am now looking for a good labeller.


Upgrade to Bell Gigahub 1 Gb/s and New DL380 Gen9

After over 10 years of faithful service, and more than a few calls from Bell, I have upgraded the old Actiontec R1000H to the Bell Gigahub and moved from the 500 Mb/s / 500 Mb/s service to the 1 Gb/s / 1 Gb/s service. Luckily, there was no install fee, no upgrade fee and no increase in the monthly cost. I guess Bell got to the point where they were as anxious as I was to get rid of an over-10-year-old piece of equipment.

Moving from the old system is interesting: (1) The ONT and router are now integrated and (2) There is NO battery backup (I have an APC 1500 SmartUPS just for the Bell equipment – more than enough runtime 🙂 ).

I had a fairly high level of concern when upgrading, as there were many, many, many posts on issues people were having with Advanced DMZ, such as not getting the correct IP, it not working at all, etc. These posts were mostly from central Canada, where Bell uses PPPoE. Bell in Atlantic Canada uses DHCP, and this makes a big difference. Advanced DMZ works without issue. On the Netgate 6100 I did release/renew the DHCP lease (I received the same IP), restarted OpenVPN, restarted HAProxy and refreshed Dynamic DNS for good measure. I guess I could have rebooted the 6100, but this worked.

The speed is not quite 1 Gb/s but very consistent using Speedtest.net to Bell Mount Pearl, St. John’s, Dartmouth, Montreal and Toronto, Eastlink St. John’s, and Rogers Halifax. Below is with some YouTube, etc. happening.

The second new thing is that the old HP DL360p Gen8 is now gone. It was getting a little old as well, plus the 1U fan whine was getting to me. I replaced it with another DL380 Gen9. This one is spec’ed with 2 × E5-2630 v3 8-core 2.4 GHz Intel Xeons, 128GB RAM and 8 × 450GB 10K disks. It also came with the rails (new in box! – usually $100 extra with other sellers on eBay). I also picked up a couple of 3m 10 Gb/s DAC cables. You need the 3m length, especially when using cable arms. (I need to pick up a couple more for the original DL380, which also has a cable arm.)


Proxmox 8.3 add HPE Smart Storage Admin CLI Tools

I use hardware RAID on my HP DL380s via the Smart Array P440ar controller. As I noted in my previous posts, my VM storage is on the RS1221+ NAS via NFS, so ZFS is not a concern. Storage on the server is just for booting Proxmox and maybe some ISO storage.

I would like to manage the RAID arrays when I have to replace a drive (future post coming). I found a project on GitHub by mrpeardotnet, but it was based on Proxmox 6.x so the instructions are not quite right. Here are my updates:

When you add the repository (it being 2025), you need to update it to use TLS and the Debian Bookworm release that Proxmox 8.3 is based on:

echo "deb https://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" > /etc/apt/sources.list.d/hp-mcp.list

Then add the GPG keys:

curl https://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
curl https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key2.pub | apt-key add -

The rest of the mrpeardotnet instructions for the SSA commands are in his GitHub gist: https://gist.github.com/mrpeardotnet/a9ce41da99936c0175600f484fa20d03
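For what it’s worth, once the repository and keys are in, the install and a couple of sanity checks look something like this (ssacli is the package name in the HPE MCP repo; the controller slot number will vary, so treat this as a sketch rather than a recipe):

apt update
apt install ssacli
# Show the controller, array and logical drive layout
ssacli ctrl all show config
# Check the physical drives (adjust slot= to match your controller)
ssacli ctrl slot=0 pd all show status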


Proxmox Datacenter Manager

If you haven’t noticed, Proxmox released its Datacenter Manager back on December 19, 2024 as alpha version 0.1. For an alpha release, it is amazingly stable. So far, it is up to version 0.1.11. It is not feature-complete, as the roadmap (https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Roadmap) notes.

That said, one feature that works really well is the migrate function (paper airplane icon highlighted in yellow).

This means you don’t have to set up a Proxmox cluster, which is beneficial if you have one or more nodes that are usually shut down for power savings (or noise – my DL360p Gen8 is 1U loud). It does not, however, give you the failover you get with a cluster.

One question I have not figured out – even with some Google University searching – is which interface is used to migrate the LXC containers or KVM VMs. It seems that it uses the admin (Proxmox VE console) network, which in my case is 1 GbE rather than my (new) 10 GbE network.

It also doesn’t yet have the ability to add Proxmox Backup Server (which I have also migrated to – as a VM rather than a dedicated server). It is on the roadmap.

If you run more than one Proxmox server, Proxmox Datacenter Manager is a great move forward, especially when you look at the competition in the enterprise market (e.g., XCP-ng).


Eaton 5PX – New Battery Fixed It

After taking a good look at the circuit boards in the 5PX, I was pleasantly surprised to find no leaking or bulging capacitors. I checked a few capacitors in circuit, and they measured acceptable. I “bit the bullet” and purchased four third-party batteries (literally 15% the cost of OEM batteries from Eaton). I installed the new batteries, paying close attention to how they needed to be wired, and everything works just fine. It still seems like a great deal.


EliteDesk 705 G3 Mini, Proxmox 8.3.3, proxmox-kernel-6.8.12-7-pve and Broadcom BCM5762 tg3 Blacklist

Well, I updated the Mini G3 to the latest Proxmox 8.3.3 with proxmox-kernel-6.8.12-7, and it seems that the need to blacklist the tg3 module is gone. In fact, with it blacklisted, you no longer have any networking. Removing tg3 from the blacklist restores networking.

The issue in the previous post, “There seems to be a delay before the network configuration comes fully up and the console shows the wrong IP,” is also gone. It was the result of a bit of stupidity on my part. When you set up Proxmox, the installer’s default configuration provides the IP address from my core network subnet. When I updated the network configuration per my previous post, it did not update /etc/hosts. A manual update fixes that. Ooops…
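For anyone hitting the same thing, the fix is simply making sure /etc/hosts points the node’s hostname at the management IP you actually configured. Something along these lines (the hostname and IP here are placeholders, not my real ones):

127.0.0.1       localhost.localdomain localhost
# Must match the node's real management IP and hostname
10.1.200.5      pve-mini.home.lan pve-mini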


Proxmox, Single NIC, VLANs and NAS Storage

I think that I have finally figured out how to get my EliteDesk 705 G3 Mini PC working. Part of the problem is the Broadcom BCM5762 NIC, for which you have to (a) blacklist the NIC driver tg3 and (b) ensure that you have added GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt" as detailed in my post https://blog.pelleys.com/?p=961.

It seems that you have to blacklist the tg3 driver as the BCM5762 is buggy as all get out – VLANs do not even seem to work (for me, at least) if tg3 is not blacklisted. You also seem to need iommu=pt; otherwise you will get a “transmit queue 0 timed out” error and lose networking.
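For reference, a minimal sketch of those two changes on the node (standard Debian/Proxmox file locations; the blacklist file name is my own choice, and this assumes the node boots via GRUB):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

# /etc/modprobe.d/blacklist-tg3.conf
blacklist tg3

# Then apply both and reboot
update-grub
update-initramfs -u
reboot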

It is not perfect. There seems to be a delay before the network configuration comes fully up, and the console shows the wrong IP. In my case it gets an IP from the native VLAN but then reverts to the set IP, subnet and VLAN.

With that out of the way, I really had to think through the logic of what I was trying to do. Here is the scenario:

  • Single NIC
  • NFS storage on a different subnet and VLAN
  • Need to have the ability for VMs to join different VLANs

The key part of the logic is to remember that NFS needs to use the native (untagged) bridge interface; otherwise NFS tries to connect over the wrong network. Here is an example configuration in /etc/network/interfaces:

auto vmbr0
iface vmbr0 inet static
  # This is the NFS host IP for the Mini G3
  address 10.1.1.1/24
  # This is the host physical NIC (normal)
  bridge-ports enp1s0
  bridge-stp off
  bridge-fd 0
  # Make sure the bridge is configured for VLANs
  bridge-vlan-aware yes
  # I manually restrict to my VM VLANs
  bridge-vids 100 200 300 400
# This is the administrative interface, forcing the VLAN for it
auto vmbr0.200
iface vmbr0.200 inet static
  address 10.1.200.5/24
  gateway 10.1.200.254

On your switch (UniFi in my case), you need to set the Native VLAN/Network of the port to the network your NFS storage is on (called “NFS-VLAN (100)” in my case) and set Tagged VLAN Management to Allow All.
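With that in place, pointing Proxmox at the NAS is the usual NFS storage add; a minimal sketch (the storage ID, server IP and export path here are examples, not my actual values):

pvesm add nfs rs1221-nfs --server 10.1.1.10 --export /volume1/proxmox --content images,rootdir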

This seems to work even when using Proxmox Datacenter Manager (PDM) migration of VMs, with no transmit queue timeouts. On that point, even though PDM is currently alpha code, it shows real promise. I like that I do not have to set up a cluster to move VMs between nodes (my HP DL360p Gen8 is usually off – the fans are too “whiny” and I don’t pay the power bill).

Hope this helps someone!


State of the Rack – January 2025

The home lab journey continues. I was a “good enough” boy last year, so Santa dropped off a UniFi Aggregation Switch. This completes my back-end “core” for my standard networking and NFS storage for my Proxmox servers.

I also updated the uplink from the Netgate 6100 to connect to the USW Aggregation at 10 GbE. (One day I will get around to upgrading my Internet link to over 1 Gb/s but, based upon usage, that is low on the priority list.)

A hint of this was my previous post, where I was totally confused as to which port was ix1. I always have the first port, ix0 in this case, as my LAN connection.

Here is the basic setup:

I picked up a brush panel as it is a nice way to pass the DAC cables through. While I still have a couple of fibre connections between the switches, going forward DAC cables will be the way I proceed unless I have some obscenely long run.

Anyway, here is what the rack looks like as of January 2025:

On a sad note, it looks like a capacitor in the Eaton 5PX 2200 has failed. Once the 5PX goes on battery, the unit reports a battery failure, even though the battery is good. It seems to be a fairly “well known” issue, so I will have to replace that. No good deal is ever that good, unfortunately.


Netgate 6100 SFP Port Assignments

Hi folks – an FYI on the Netgate 6100 port assignments. This really messed me up, as the SFP port numbering is not clear – at least not clear to me – in the documentation. So, in case anyone else is confused by this, here are the port assignments:

  • ix3 (SFP and RJ45 shared – 1GbE)
  • ix2 (SFP and RJ45 shared – 1GbE)
  • ix1 (SFP+ – 10GbE)
  • ix0 (SFP+ – 10GbE)