Upgrade to Bell Gigahub 1 Gb/s and New DL380 Gen9

After over 10 years of faithful service, and more than a few calls from Bell, I have upgraded the old Actiontec R1000H to the Bell Gigahub and moved from the 500 Mb/s / 500 Mb/s service to the 1 Gb/s / 1 Gb/s service. Luckily, there was no install fee, no upgrade fee and no increase in the monthly cost. I guess Bell was getting to the point where they were as anxious as I was to get rid of an over-10-year-old piece of equipment.

Moving from the old system brings two interesting changes: (1) the ONT and router are now integrated, and (2) there is NO battery backup (I have an APC 1500 SmartUPS just for the Bell equipment – more than enough runtime 🙂 ).

I had a fairly high level of concern when upgrading as there were many, many, many posts on issues people were having with Advanced DMZ, such as not getting the correct IP, it not working at all, etc. Those posts were mostly from central Canada, where Bell uses PPPoE. Bell in Atlantic Canada uses DHCP, and that makes a big difference: Advanced DMZ works without issue. On the Netgate 6100 I did release/renew the DHCP lease (I received the same IP), restarted OpenVPN, restarted HAProxy and refreshed Dynamic DNS for good measure. I guess I could have rebooted the 6100, but this worked.
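For anyone doing the same cleanup from the pfSense shell instead of the GUI, something like the following should cover the service restarts and the Dynamic DNS refresh. This is only a sketch from memory of pfSense's shell tooling (the "svc" playback script and the rc.dyndns.update script) – the exact service names and instance arguments depend on your configuration, and the DHCP release/renew itself I did from Status > Interfaces in the GUI.

# Restart services from the pfSense shell (you may need to append the instance,
# e.g. "server 1", if you run more than one OpenVPN instance -- these exact
# invocations are assumptions, check "pfSsh.php playback svc help")
pfSsh.php playback svc restart openvpn
pfSsh.php playback svc restart haproxy

# Force a Dynamic DNS update for all configured entries
/etc/rc.dyndns.update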

The speed is not quite 1 Gb/s but very consistent using SpeedTest.net to Bell Mount Pearl, St. John’s, Dartmouth, Montreal and Toronto, Eastlink St. John’s, and Rogers Halifax. Below is a result with some YouTube, etc. happening.

The second new thing is that the old HP DL360p Gen8 is now gone. It was getting a little old as well, plus the 1U fan whine was getting to me. I replaced it with another DL380 Gen9. This one is spec’ed with 2 x E5-2630 v3 8-core 2.4 GHz Intel Xeons, 128 GB RAM and 8 x 450 GB 10K disks. It also came with the rails (new in box! – usually $100 extra with other sellers on eBay). I also picked up a couple of 3 m 10 Gb/s DAC cables. You need the 3 m length, especially when using cable arms. (I need to pick up a couple more for the original DL380, which also has a cable arm.)


Proxmox 8.3 add HPE Smart Storage Admin CLI Tools

I use hardware RAID on my HP DL380s via the Smart Array P440ar controller. As I noted in my previous posts, my VM storage is on the RS1221+ NAS via NFS, so ZFS is not a concern. Storage on the server is just for booting Proxmox and maybe some ISO storage.

I would like to manage the RAID arrays when I have to replace a drive (future post coming). I found a project on GitHub by mrpeardotnet, but that was based on Proxmox 6.x so the instructions are not quite right. Here are my updates:

When you add the repository (this being 2025), you need to update it to use TLS and to target Proxmox 8.3, which is based on Debian Bookworm:

echo "deb https://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" > /etc/apt/sources.list.d/hp-mcp.list

Then add the GPG keys:

curl https://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
curl https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key2.pub | apt-key add -
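Note that apt-key is deprecated on current Debian releases, so an alternative is to store the keys in /usr/share/keyrings and reference them from the repository entry. This is just a sketch, assuming the same key URLs as above and a keyring file name of my own choosing:

# Download the HPE public keys and store them as one binary keyring
curl -fsSL https://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub  | gpg --dearmor >  /usr/share/keyrings/hpe-mcp.gpg
curl -fsSL https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | gpg --dearmor >> /usr/share/keyrings/hpe-mcp.gpg
curl -fsSL https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key2.pub | gpg --dearmor >> /usr/share/keyrings/hpe-mcp.gpg

# Reference the keyring explicitly in the repository definition
echo "deb [signed-by=/usr/share/keyrings/hpe-mcp.gpg] https://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" > /etc/apt/sources.list.d/hp-mcp.list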

The rest of the mrpeardotnet instructions for the SSA commands are in his GitHub gist: https://gist.github.com/mrpeardotnet/a9ce41da99936c0175600f484fa20d03
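Once the repository is in place, the package install and a few everyday status checks look roughly like this – a sketch, where ssacli is the package/command name provided by the HPE MCP repository and slot=0 is just my guess at where the P440ar sits (adjust to whatever "ssacli ctrl all show" reports):

apt update && apt install ssacli

# Show all controllers and their full array configuration
ssacli ctrl all show config

# Status of every physical and logical drive on the controller in slot 0
ssacli ctrl slot=0 pd all show status
ssacli ctrl slot=0 ld all show status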


Proxmox Datacenter Manager

If you haven’t noticed, Proxmox released its Datacenter Manager back on December 19, 2024 as alpha version 0.1. For an alpha release, it is amazingly stable. So far, it is up to version 0.1.11. It is not yet feature complete, as the roadmap (https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Roadmap) notes.

That said, one feature that works really well is the migrate function (paper airplane icon highlighted in yellow).

This means you don’t have to set up a Proxmox cluster, which is beneficial if you have one or more nodes that are usually shut down for power savings (or noise – my DL360p Gen8 is 1U loud). It does not, however, give you the failover you get with a cluster.

One question I have not figured out – even with some Google University searching – is which network interface is used to migrate the LXC containers and KVM VMs. It seems to use the admin (Proxmox VE console) network, which in my case is 1 GbE rather than my (new) 10 GbE network.
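For comparison, in a regular Proxmox cluster you can pin migration traffic to a specific network in /etc/pve/datacenter.cfg. Whether PDM's remote migration honours the same setting is something I have not confirmed, so treat this purely as the cluster-side knob; the CIDR below is just an example storage subnet:

# /etc/pve/datacenter.cfg
# Send (cluster) migration traffic over the 10 GbE storage network
migration: type=secure,network=10.1.1.0/24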

It also doesn’t yet have the ability to add a Proxmox Backup Server (which I have also migrated to – as a VM rather than a dedicated server), but that is on the roadmap.

If you run more than one Proxmox server, Proxmox Datacenter Manager is a great step forward – especially for competing in the enterprise market (against the likes of XCP-ng).


Eaton 5PX – New Battery Fixed It

After taking a good look at the circuit boards in the 5PX, I was pleasantly surprised to find no leaking or bulging capacitors. I checked a few capacitors in circuit, and they measured acceptable. So I bit the bullet and purchased four third-party batteries (literally 15% of the cost of OEM batteries from Eaton). I transferred in the new batteries, paying close attention to how they needed to be wired, and everything works just fine. It still seems like a great deal.


EliteDesk 705 G3 Mini, Proxmox 8.3.3, proxmox-kernel-6.8.12-7-pve and Broadcom BCM5762 tg3 Blacklist

Well, I updated the Mini G3 to the latest Proxmox 8.3.3 with proxmox-kernel-6.8.12-7 and it seems that the need to blacklist the tg3 module is gone. In fact, with it blacklisted, you no longer have any networking at all. Removing tg3 from the blacklist restores networking.
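Undoing the blacklist is just reversing the earlier change – a sketch, assuming the blacklist line went into a dedicated /etc/modprobe.d/blacklist-tg3.conf file (adjust to wherever you actually put it):

# Remove the blacklist entry and rebuild the initramfs
rm /etc/modprobe.d/blacklist-tg3.conf
update-initramfs -u -k all
reboot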

The issue in the previous post, “There seems to be a delay before the network configuration comes fully up, and the console shows the wrong IP”, is gone. It was the result of a bit of stupidity on my part. When I set up Proxmox it used the default configuration, and that put the IP address on my core network subnet. When I updated the network configuration per my previous post, it did not update /etc/hosts. A manual update fixes that. Oops…
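For anyone who hits the same thing: the fix is just making the /etc/hosts entry match the node's new management address. A minimal sketch, using the management IP from my network config post and a made-up hostname (pve-mini.example.lan is hypothetical – use your actual node name and domain):

# /etc/hosts -- the node's name must resolve to the management IP
127.0.0.1       localhost.localdomain localhost
10.1.200.5      pve-mini.example.lan pve-mini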


Proxmox, Single NIC, VLANs and NAS Storage

I think that I have finally figured out how to get my EliteDesk 705 G3 Mini PC working. Part of the problem is the Broadcom BCM5762 NIC, for which you have to (a) blacklist the tg3 NIC driver and (b) ensure that you have added GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt" as detailed in my post https://blog.pelleys.com/?p=961.

It seems that you have to blacklist the tg3 driver, as the BCM5762 is buggy as all get out – VLANs do not even seem to work (for me, at least) if tg3 is not blacklisted. You also seem to need iommu=pt, otherwise you will get a “transmit queue 0 timed out” error and lose networking.
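For reference, the two changes together look roughly like this – a sketch, where the modprobe.d file name is my own choice and the GRUB line matches the one quoted above:

# Blacklist the tg3 driver
echo "blacklist tg3" > /etc/modprobe.d/blacklist-tg3.conf

# In /etc/default/grub, set:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
# then regenerate the GRUB config and initramfs and reboot
update-grub
update-initramfs -u -k all
reboot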

It is not perfect. There seems to be a delay before the network configuration comes fully up, and the console shows the wrong IP. In my case it gets an IP from the native VLAN but then reverts to the configured IP, subnet and VLAN.

With that out of the way, I really had to think through the logic of what I was trying to do. Here is the scenario:

  • Single NIC
  • NFS storage on a different subnet and VLAN
  • Need to have the ability for VMs to join different VLANs

The key part of the logic is to remember that NFS needs to be on the native bridge interface, otherwise NFS tries to connect using the wrong network. Here is an example configuration in /etc/network/interfaces:

auto vmbr0
iface vmbr0 inet static
  # This is the NFS host IP for the Mini G3
  address 10.1.1.1/24
  # This is the host physical NIC (normal)
  bridge-ports enp1s0
  bridge-stp off
  bridge-fd 0
  # Make sure the bridge is configured for VLANs
  bridge-vlan-aware yes
  # I manually restrict to my VM VLANs
  bridge-vids 100 200 300 400

# This is the administrative interface, forced onto VLAN 200
auto vmbr0.200
iface vmbr0.200 inet static
  address 10.1.200.5/24
  gateway 10.1.200.254

On your switch, UniFi in my case, you need to set the Native VLAN/Network to the network your NFS storage is on (called “NFS-VLAN (100)” in my case) and Tagged VLAN Management to Allow All.
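To apply and sanity-check the host side, something like the following is enough – a sketch, assuming ifupdown2 (which Proxmox ships) and the addresses from the example above; the NAS IP in the ping is hypothetical:

# Apply the new /etc/network/interfaces without a reboot
ifreload -a

# Confirm the bridge is VLAN aware and see which VLANs are tagged on the uplink
bridge vlan show

# Management IP should be on vmbr0.200; NFS traffic should source from vmbr0
ip addr show vmbr0
ip addr show vmbr0.200

# Quick reachability test toward the NAS on the storage subnet
ping -c 3 10.1.1.10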

This seems to work even when using Proxmox Datacenter Manager (PDM) migration of VMs, with no transmit queue timeouts. On that point, even though PDM is currently alpha code, it shows real promise. I like that I do not have to set up a cluster to move VMs between nodes (my HP DL360p Gen8 is usually off – too-whiny fans, and I don’t pay the power bill).

Hope this helps someone!


State of the Rack – January 2025

The home lab journey continues. I was a “good enough” boy last year, so Santa dropped off a Unifi Aggregation Switch. This completes my back end “core” for my standard networking and NFS storage for my Proxmox servers.

I also updated the uplink from the Netgate 6100 to connect to the USW Aggregation at 10 GbE. (One day I will get around to upgrading my Internet link to over 1 Gb/s, but based upon usage, that is low on the priority list.)

A hint of this was my previous post where I was totally confused as to which port was ix1. I always have the first port, ix0 in this case, as my LAN connection.

Here is the basic setup:

I picked up a brush panel as it is a nice way to pass the DAC cables through. While I still have a couple of fibre connections between the switches, going forward DAC cables will be the way I proceed unless I have some obscenely long run.

Anyway, here is what the rack looks like as of January 2025:

On a sad note, it looks like a capacitor in the Eaton 5PX 2200 has failed. Once the 5PX goes on battery, the unit reports a battery failure, even though the battery is good. It seems to be a fairly “well known” issue, so I will have to deal with that. No good deal is ever that good, unfortunately.


Netgate 6100 SFP Port Assignments

Hi Folks – an FYI on the Netgate 6100 port assignments. This really messed me up as the SFP port numbering is not clear – at least not clear to me – in the documentation. So, in case anyone else is confused by this, here are the port assignments (with a quick way to confirm them after the list):

  • ix3 (SFP and RJ45 shared – 1GbE)
  • ix2 (SFP and RJ45 shared – 1GbE)
  • ix1 (SFP+ – 10GbE)
  • ix0 (SFP+ – 10GbE)
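If you want to double-check which logical interface maps to which physical cage, the FreeBSD/pfSense shell reports the negotiated media per interface – a quick sketch (run from Diagnostics > Command Prompt or an SSH shell):

# The "media:" line shows whether the port negotiated 1GbE or 10GbE,
# and "status: active" confirms which cage has the live cable
ifconfig ix0 | grep -E "media|status"
ifconfig ix1 | grep -E "media|status"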

Winter is Coming (and Some Deals are Too Good to Pass Up)

Eaton 5PX 2200 (top), Lenovo RT1.5KVA 2U Rack (bottom)

Ok… The truth is I was scrolling through Facebook Marketplace and I saw an Eaton 5PX 2200 UPS for sale. It was rackmount and the pictures looked good, but for an impulse purchase the price was a little high. A few weeks went by and the price dropped a bit – getting closer. Then this weekend it dropped to a price that, even if the batteries were shot, was worth it.

(I should note that the Lenovo is a rebadged Eaton unit. The network module reports Eaton.)

I dropped over and not only did the battery seem to be fine (there was barely enough charge left to check), it looked good (a little pushed in on the back where the power cord comes in), had no scratches, and it came with the rails (FYI – they only go to 36″ and HP servers need 38″, so they are too short and I had to use the shelf rails that came with the rack), manuals and CDs. The story the seller told me was that he took the UPS in trade for some gaming PC parts, thinking he would use it for his gaming PC. Considering any higher-end gaming PC draws more than my DL380 Gen9 with dual power supplies, that makes sense. Anyway, the Eaton draws 20A, which requires the plug below – which the seller did not have.

I have a 20A outlet in my home office (future planning 😉 ) so no cord cutting for me.

After moving the DL380 to the Eaton, with the rest of the gear left on the Lenovo RT1.5KVA, here are the runtimes (sorry, I didn’t bother to match the brightness and contrast):

Well, that is nice – all ready for winter and any shorter power outages.

Here’s where we are now:

The only thing left is my 10GbE switch. And with winter comes Christmas 🙂


Another Flashback from 2001

I was going through some old web backups (because of the last post) and found a picture (320×240 – I scaled it up a bit for this post; I think we had a Sony Mavica digital camera).

Here’s some of what I had put in place just before I left for a new job:

  • A SAN with Hewlett-Packard FC60/SC10 disks and Brocade Silkworm-based Fibre Channel interconnections
  • Hewlett-Packard NetServers
  • Open Storage Solutions servers and external disk arrays
  • Custom-built Pentium II / III rack-mount servers
  • Digital AlphaServer-based network support systems (internal/external DNSes)
  • Exabyte 220 tape library system
  • Yes, that is a Cisco PIX 1000 (or 500, I can’t actually remember but it used 3.5″ floppies for backups)

Edit: I just realized that there is an IBM workstation running RealProducer (lonely black tower in the middle of the picture) with a whitebox server to the left running RealServer. Those were the days 🙂
