I think I have finally figured out how to get my EliteDesk 705 G3 Mini PC working. Part of the problem is the Broadcom BCM5762 NIC: you have to (a) blacklist the tg3 NIC driver and (b) add GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt" to your GRUB configuration, as detailed in my post https://blog.pelleys.com/?p=961.
It seems that you have to blacklist the tg3 driver because the BCM5762 is buggy as all get out: VLANs do not even seem to work (for me, at least) if tg3 is not blacklisted. You also seem to need iommu=pt; otherwise you will get a "transmit queue 0 timed out" error and lose networking.
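As a sketch, the blacklist and kernel command-line changes above look roughly like this on a Debian-based Proxmox host (the file name blacklist-tg3.conf is my own choice, not required):

```shell
# Blacklist the tg3 driver (file name under /etc/modprobe.d/ is arbitrary)
echo "blacklist tg3" > /etc/modprobe.d/blacklist-tg3.conf

# Rebuild the initramfs so the blacklist applies at early boot
update-initramfs -u

# In /etc/default/grub, set:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
# then regenerate the GRUB config and reboot
update-grub
reboot
```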
It is not perfect. There seems to be a delay before the network configuration comes fully up, and the console shows the wrong IP. In my case it first picks up an IP from the native VLAN but then reverts to the configured IP, subnet and VLAN.
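If you want to confirm the interface has actually settled on the right address, something like this works (vmbr0.200 matches the administrative interface in my config; adjust for your setup):

```shell
# Show the addresses currently assigned to the admin interface
ip -br addr show vmbr0.200

# Check the boot logs for the bridge and NIC driver coming up
journalctl -b | grep -i -E 'vmbr0|tg3'
```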
With that out of the way, I really had to think through the logic of what I was trying to do. Here is the scenario:
- Single NIC
- NFS storage on a different subnet and VLAN
- Need to have the ability for VMs to join different VLANs
The key part of the logic is to remember that NFS needs to be on the native bridge interface; otherwise NFS tries to connect using the wrong network. Here is an example configuration in /etc/network/interfaces:
auto vmbr0
iface vmbr0 inet static
    address 10.1.1.1/24            ← This is the NFS host IP for the Mini G3
    bridge-ports enp1s0            ← This is the host physical NIC (normal)
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes          ← Make sure the bridge is configured for VLANs
    bridge-vids 100 200 300 400    ← I manually restrict to my VM VLANs

# This is the administrative interface, with its VLAN forced
auto vmbr0.200
iface vmbr0.200 inet static
    address 10.1.200.5/24
    gateway 10.1.200.254
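Once the file is in place, the configuration can be applied live with ifupdown2, and individual VM NICs can then be tagged onto their VLANs. A sketch (the VMID 101 and VLAN tag 300 here are just examples, not from my setup):

```shell
# Apply /etc/network/interfaces without a reboot (ifupdown2)
ifreload -a

# Attach a VM's first NIC to the VLAN-aware bridge with tag 300
qm set 101 --net0 virtio,bridge=vmbr0,tag=300
```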
On your switch (UniFi in my case), you need to set the Native VLAN/Network to the network your NFS storage is on (called "NFS-VLAN (100)" in my case) and Tagged VLAN Management to Allow All.
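With the native VLAN carrying the NFS subnet, the storage can then be defined on the Proxmox side. A sketch, assuming a hypothetical NFS server at 10.1.1.10 exporting /export/vmstore (substitute your own server and export path):

```shell
# Define the NFS storage; traffic goes out the bridge's untagged (native) interface
pvesm add nfs vmstore --server 10.1.1.10 --export /export/vmstore --content images,iso

# Verify the storage is active
pvesm status
```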
This seems to work even when using Proxmox Datacenter Manager (PDM) migration of VMs, with no transmit queue timeout. On that point, even though PDM is currently alpha code, it shows real promise. I like that I do not have to set up a cluster to move VMs between nodes (my HP DL360 Gen8 is usually off: the fans are too "whiny" and I don't pay the power bill).
Hope this helps someone!