10GbE iSCSI – Nice and Quick

I’ve been running iSCSI between my DL380 Gen9 VMware ESXi 7.0.3 server and Synology RS1221+ NAS at 1GbE. It works okay, but as one would expect, it is not all that quick. For my birthday I received a Synology E10G21-F2 dual-port 10GbE SFP+ adapter:

The DL380 has an HP FlexFabric 10Gb 2-port 554FLR-SFP Adapter that I got for Christmas:

While I have used OM3 fibre on my UniFi switches, this time I decided to go with DAC cables. Fibre is, well, neat, but DAC is simpler: no dirty fibre connectors or the like to worry about. It’s probably a little cheaper as well. I have redundant connections between the ESXi server and the NAS.

VM startup seems a little faster, but that is hard to judge. A Windows Server 2022 VM running CrystalDiskMark showed the following results:

On the Synology NAS, here are the results:

Note that these scores are in megabytes per second, so we need to multiply by 8:
* Windows Server: 5,678.64 megabits per second
* Synology NAS: 5,388.8 megabits per second
Not bad with the overhead of a VM.
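
Working backwards (a quick sanity check on the conversion), the raw throughput figures in megabytes per second would have been roughly:

* Windows Server: 5,678.64 Mbit/s ÷ 8 ≈ 709.8 MB/s
* Synology NAS: 5,388.8 Mbit/s ÷ 8 ≈ 673.6 MB/s

Either way, that is a bit over half of the roughly 1,250 MB/s a 10GbE link can carry in theory, so there is still headroom.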

Note that there is no switch involved. I have a direct connection between ESXi and the NAS. Eventually I may add a 10 GbE switch (likely a UniFi USW-Aggregation) and a 10 GbE card to the old DL360 Gen8. I can still connect the DL360 to the iSCSI targets using 1 GbE. The DL360 is only for testing anyway.
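
Since there is no switch, the redundancy comes from iSCSI port binding and multipathing on the two direct links rather than from the network. As a rough sketch of what that looks like from the ESXi shell (the adapter and vmkernel names below are placeholders and will differ on another setup):

esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2
esxcli iscsi networkportal list -A vmhba64

Each vmkernel port gets bound to the software iSCSI adapter, and the path selection policy on the datastore decides how the two links are used.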


ESXi Patching – Deprecated CPUs

Another aide-mĂ©moire so I don’t have to search 🙂

esxcli network firewall ruleset set -e true -r httpClient

esxcli software profile update -p [PATCH_LEVEL] \
--no-hardware-warning \
-d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

esxcli network firewall ruleset set -e false -r httpClient
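
For example, with a concrete image profile filled in (the profile name below is only an illustration; substitute the current patch level from the depot or the ESXi Patch Tracker):

esxcli software profile update -p ESXi-7.0U3g-20328353-standard \
--no-hardware-warning \
-d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

The --no-hardware-warning switch is what lets the update proceed on CPUs that ESXi 7.x flags as deprecated.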


THIS Goes Here :-)

I decided to treat myself. It was definitely not the way I wanted to treat myself, as the money came from my inheritance from the passing of my father in October 2020. But after my sister and I got the estate straightened away – no issues, but it was a great deal of work – my sister’s advice to do something for myself made sense.

So… My “treat” is an HP DL380 Gen9.

The specs are:

  • 2 x Xeon E5-2687W V3 CPUs (25 MB cache, 3.10 GHz/3.50 GHz boost, 10 cores/20 threads)
  • 128GB DDR4 Registered ECC RAM
  • 4 x 400GB SAS SSD 2.5” drives
  • HP Smart Array P440AR RAID Controller with FBWC
  • 4 x GbE network ports
  • 2 x 10 GbE via an HP FlexibleLOM FlexFabric 554FLR-SFP+ adapter

The DL360 Gen8 is a fine server, but it is a little old. It is also rather loud when a fan fails; I think all 1U servers can have loud fans. The DL380 is 2U and the fans are much quieter.

The dual 10 GbE LOM card was icing on the cake. I run my VMs on the SAN although the four Hitachi Ultrastar SSD400M drives are mighty fast on the 12 Gbit/s SAS backplane (yes, they are a little old – more info here). My Synology RS1221+ has lots of storage – not super fast but good enough for a home lab. I use iSCSI (see this post for the background) and 1 GbE can be sluggish. I was planning on moving to 10 GbE, but I needed two dual port PCIe cards (one for the RS1221+ and the other for the server) and the cards needed to be compatible with Synology DSM and VMware ESXi 7.0. That was going to be a little pricey. I now just need the dual port card for the RS1221+ and a couple of DAC cables.

So far, I have tried out Proxmox 7 and ESXi 7. ESXi is an old friend – it just works. Proxmox is rather interesting and has some fine features. I’m not quite sold on how each storage location is restricted to particular content types. I haven’t taken the time to find out why, and there probably is a reason, but it still strikes me as odd. Being open source is nice; I like having the source code open to independent audit. That said, ESXi is used by some, shall we say, very security-conscious institutions. As with other open source projects like pfSense and TrueNAS, the range of supported devices always seems much broader and older hardware remains supported. I did have to switch the P440AR controller to IT mode. I really like that it is real IT mode with the cache turned off, not a bunch of individual “RAID-0” arrays, so ZFS could see all the SMART info. All said, I can appreciate those who endorse Proxmox.
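
For anyone who hasn’t seen it, this comes from /etc/pve/storage.cfg, where each storage entry declares which content types it may hold. A rough, illustrative example (names and paths are not from my box):

dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

So ISOs can live in one place but not another, disk images somewhere else again, which is what threw me at first.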

This was the first time I experimented with a shared iSCSI LUN using VMFS. VMFS 6 is cluster aware, so multiple ESXi hosts can share the storage. I simply had to allow multiple sessions on the iSCSI LUNs on the Synology box and give both the DL360 and DL380 permission to mount them. After that, all you have to do is register the VM on the new host and Bob’s your uncle. I did make sure that the network configuration and names are the same on both the DL360 and DL380. I did change the P440AR to IR mode and created a RAID-5 configuration. Yes, ESXi works just fine too.
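
Registering a VM that already lives on the shared datastore is a one-liner from the ESXi shell (the datastore and VM names here are just placeholders):

vim-cmd solo/registervm /vmfs/volumes/iscsi-lun1/SomeVM/SomeVM.vmx

The same thing can be done from the host client by browsing the datastore and choosing “Register VM” on the .vmx file.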

Of note, the iSCSI issues that I had and wrote about previously did not occur at all – despite my trying to make it happen. This could be a result of either the latest ESXi update or the Synology DSM update or both.

I might give XCP-NG another try. My last attempt did not impress me. Not that XCP-NG isn’t any good – it is. It is just the way it does things. Maybe it is me but I just can’t get my head around XCP-NG.

Anyway, I won’t be making a decision on how I’ll be proceeding just yet. I still need to add the System Insight Display (SID).

Strange that it isn’t standard. That treat will be opened December 25th.


What Goes Here?

Fall is fully here and we’ve had our first snowfall: about 15-20 cm overnight from November 5th to 6th. Most of it has melted – but not all. I actually put my snow tires on…

Anyway, I’ve been a little bored so…

(Image created with GIMP)

ESXi 7.0 Mid-August Update

Since my last post, there have been some developments. Booting from USB went from being inconsistent to being a non-starter. So, back to the old faithful hard drive boot. I have four internal 300GB SAS drives in a RAID-5 configuration. Since I have my VMs on the Synology NAS using iSCSI, the internal drive array was never used for much. Sometimes I would have a specific VM that I wanted to make sure would be up in the event that the NAS went down, such as the UniFi controller. I realised that isn’t a real worry. Along with having backups on both NASes, I have an offline backup where the configurations of pfSense, UniFi and ESXi are kept.

ESXi backups are important, as noted in previous posts. For quick reference (maybe mine!), the instructions are provided by VMware in the KB article How to back up ESXi host configuration. More steps are needed than for pfSense or UniFi, but it is something I do at least monthly, or before and after any major configuration changes.
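
The short version, if I recall the KB correctly, run from the ESXi shell:

vim-cmd hostsvc/firmware/sync_config
vim-cmd hostsvc/firmware/backup_config

The second command prints a download URL containing an asterisk; replace the * with the host’s IP or name and grab the configBundle tarball before the link expires.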

I’m not sure if booting ESXi from USB is an issue with my DL360p Gen8 server, the USB thumb drive (and I tried several) or ESXi 7.x. I do know that booting from the internal drive array does work and I’ll stick with that.

I also updated to the most current patch level of ESXi after switching to booting from the internal drive array. The update to ESXi 7.0.2 (build number 17867351) went flawlessly, although as slow as always. A great resource (especially if you do not use vSphere) is VMware Front Experience‘s VMware ESXi Patch Tracker; it even includes step-by-step instructions in a pop-up.


ESXi 7.0 Upgrade to Update 2 and iSCSI

I realized last night that in my efforts to get ESXi up and running, I had installed the original release of ESXi 7.0. Since I had taken Friday off work to take care of a few things that needed to be done during the weekdays I decided to upgrade ESXi to Update 2. It also allowed me to test my theory from my previous post.

The update went as planned (and boy-oh-boy are updates slow!). As I expected, the iSCSI share did not show up in ESXi afterwards. Let’s check the NAS and see if the targets are still there and if the ESXi server is connected. Yep, ESXi is shown as a host in DSM.

Okay, ssh into the ESXi box. Can I ping the NAS? Yes, working fine.

Log into the ESXi console (again, I’m not using vSphere). The software iSCSI sees the host and the target. But, no iSCSI devices are showing up under storage devices and obviously the datastores are not present.

Let’s try my theory from the last post. Try esxcfg-volume -l and get the UUIDs. Well now, iSCSI volumes are not showing up. Let’s try a few more pings to the NAS, rescan the devices. Nope, not working.

Time for some deep thinking. No panic this time as I know that the VMs are still in the iSCSI LUNs and I just have to get them mounted. And I have good backups!

Some more Googling and the first result from Reddit (vSphere 7.0 U2 iSCSI not working with older HP Lefthand SAN) seems to have the answer. It seems that the issue is around the IQN and how ESXi handles it. I deleted the ESXi host on the NAS and re-created it. A rescan of the storage devices showed the iSCSI LUNs. I then ran esxcfg-volume -l and mounted the two UUIDs using esxcfg-volume -M UUID (capital “M” rather than lowercase “m”). Bang, the datastores reappeared. While the article is about upgrades and not reboots, my gut says this problem will persist until the bug is squashed. I haven’t tested a reboot because that is a pain in the arse and I have other things to do.

From the Reddit post, there is now a KB article from VMware on this issue (iSCSI adapter IQN may change during the upgrade of ESXi 7.0 U1 (84339)). I haven’t tried the workaround yet (I’ll try it if a patch isn’t available and I have to shut down the ESXi server or the NAS), but here is the workaround example from the article:

To work around the issue:
Prior to the upgrade, use the esxcli get and set commands to set the generated iSCSI adapter IQN explicitly. As the IQN is a user setting it won’t change after the upgrade.
Get the IQN details:
$ esxcli iscsi adapter get -A vmhba67

vmhba67
   Name: iqn.1998-01.com.vmware:w1-hs3-n2503.eng.vmware.com:452738760:67


Set the IQN details:
$ esxcli iscsi adapter set -A vmhba67 -n iqn.1998-01.com.vmware:w1-hs3-n2503.eng.vmware.com:452738760:67


EDIT

I couldn’t wait for the next time I needed to reboot the ESXi server. That is probably for the best, since I don’t want a number of things going on at the same time. I applied the VMware KB workaround, then shut down the VMs and rebooted the server.

SUCCESS! The iSCSI datastores automatically came up. A gin and tonic with lots of ice may be in order!


Importance of Backups and Wednesday Trials and Tribulations

On Sunday, I upgraded my Synology DS216+ II to DSM 7.0. I followed the DSM 7.0 reviews by Robert (Robbie) Andrews on NAS Compares and took the plunge, starting with the DS216, my backup NAS. Things went well, so on Wednesday evening my main NAS, the RS1221+, was upgraded. As my VMware ESXi 7.0 host has its VMs on the RS1221 using iSCSI, I shut down the ESXi server because past experience has shown me that the remote datastores “disappear” when the NAS shuts down and won’t reappear until the ESXi box is rebooted. (More on that later; I may have found a workaround.)

Anyway, the RS1221 DSM 7.0 upgrade went well. For both the DS216 (which is technically under-specced, as it only has 512 MB of RAM and 1 GB is recommended, but it is on the Synology compatibility list) and the RS1221, the new interface is really snappy. Login is much, much faster. So far, so good.

After about a half-hour – enough time for the RS1221 to “settle in” – I powered on the ESXi server. And discovered that my iSCSI datastores were no longer there. The first thing to check was the RS1221, to make sure that iSCSI was running and the LUNs with the VMs were still there. OK, everything was fine there. I could see the ESXi host connected on the RS1221.

Back to the ESXi server. Under storage I could see that the Software iSCSI adapter was there. So were the iSCSI targets. But no datastores… Ok, rescan. No dice, still no datastores. Time to reboot.

And… ESXi would not come up. It could not find the internal USB thumb boot drive. A moment of panic until I remembered I had (not recently, but recently enough) backed up the ESXi configuration – twice, once to the DS216 and once to the RS1221. As I do not use vCenter, I had to back up the configuration from the ESXi shell. This would save a lot of time and effort compared to rebuilding the configuration from scratch. That information can be found in the VMware Knowledge Base: How to back up ESXi host configuration. Time to grab a couple of new USB thumb drives (one for the installer, one as the destination) and make a new boot drive.

And… I could not get the new installation (not the installer) to boot. I could boot from the installer thumb drive and the installer could see the destination thumb drive and install ESXi. But there was no way that the server would boot ESXi after installation. Manually selecting the thumb drive didn’t work either. By now it was getting late and I had to work in the morning. Off to bed.

Thursday evening, and after a lot of trial and error, I remembered that the three thumb drives I was using as targets had been used as FreeNAS/TrueNAS installers. Now, TrueNAS uses FreeBSD as its base, and that seems to do some strange things to the signatures on the drive. Even deleting the partitions and formatting the drive as FAT32 didn’t help. Weird. Finally, I also recalled that when I was first installing FreeNAS I had a similar problem with disk signatures. Solution: remove the disk signature on the target thumb drive and all was well – ESXi would boot. Since I had the backup of the ESXi box’s configuration, I applied the backup and the configuration returned.
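
Removing the signature really just means wiping the very start of the drive rather than only deleting partitions. On Windows, a diskpart clean is one way to do it (the disk number below is only an example; check it with list disk first, because clean wipes the selected disk’s partition table and signature):

diskpart
list disk
select disk 2
clean
exit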

But… the iSCSI datastores were still not there. It was getting late again and I had to work Friday. Off to bed for a not-so-restful night.

After a lot of Googling I found an article on VirtualizationHowto.com by Brandon Lee titled VMware ESXi 6.5 Can’t Add Existing iSCSI LUN (from back in December 2016 for ESXi 6.5!) that was most helpful. It is great for troubleshooting. Anyway, the high level solution (details in the article) was to:

esxcfg-volume -l
esxcfg-volume -m UUID

Magic – the datastores reappeared. (I had to do it twice, once for each of the iSCSI targets. Next time I think I’ll use “-M” to make the mount permanent across reboots. I’ll get around to that.) This might also fix the need to reboot the ESXi box when I have to reboot the NAS, or when I have to reboot the switch to upgrade its firmware.

Now, on backups:

Synology has a free application called Active Backup for Business. You have to register when you install it, but it is still free. Active Backup for Business lets you back up your VMs even while they are running. If you do not have vCenter (I don’t), you need to enable SSH and the ESXi Shell for Active Backup to work. (An aside: after enabling SSH and the ESXi Shell, the settings did not stay set after a reboot. I think that was a warning that the original USB thumb drive was starting to fail.) You also need to enable Changed Block Tracking (CBT) on the ESXi host to reduce the amount of data transferred and the backup time, because CBT backs up only the blocks that have changed since the previous backup. Synology (and VMware) has a nice article on how to do this in the KB article How to enable CBT manually for a virtual machine. Unfortunately, I have not found a way to automatically back up the ESXi configuration yet. This is a great solution for Synology users.
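
If I remember the KB correctly, enabling CBT manually boils down to powering the VM off, adding a couple of advanced configuration parameters (shown here in .vmx form, with scsi0:0 standing in for whichever virtual disks you want tracked), and powering it back on:

ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"

Check the KB article itself for the exact steps.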

The other thing is that I am, one could say, religious about backing up configurations. The pfSense configuration is backed up before and after changes and upgrades, as is the UniFi configuration. I back them up to three locations: the DS216, the RS1221 and my “work” laptop. I also keep previous firmware available so I can roll back if needed (where possible – the DSM upgrade can’t be rolled back). Backups are a good thing!


Final Upgrades Completed – Only Tweaks Left

As it finally started warming up, I noticed that our electric bill was not going down as expected. We use electric heat (for hot water as well, and we have a heat pump), so one would expect that as temperatures increased the power bill would go down. After some thought, I realized that I had two things in the rack that, while helping keep the office warm during winter, were not helping at all during the summer.

The lesser of the “problem” children was the HP DL360 Gen7. I used it only for testing – and that utility was greatly reduced when VMware ESXi 7.0 dropped support for its architecture – and it was only on for brief periods (it had not been powered on for over 45 days). While it was powered off, there was still power usage, as it was not truly off but in standby mode.

The biggest problem was my Supermicro TrueNAS (FreeNAS) server. That server is a beast with old technology. The specs were:

  • Chassis: Supermicro SuperChassis 825TQ-R740LPB 2U 8 x 3.5″ Drive Bays
  • Power Supply: 2 x 740 Watt PWS-741P-1R Power Supply Platinum
  • Backplane: Supermicro BPN-SAS-825TQ 8-port 2U TQ (W/ AMI 9072)
  • Motherboard: Supermicro X9DR3-LN4F+
  • CPU: 2 x Intel Xeon E5-2630 V1 Hex (6) Core 2.3GHz
  • RAM: 32GB DDR3 ECC (8 x 4GB – DDR3 – REG)
  • Storage Controller: LSI 9210-8i 6 Gb/s
  • Boot Pool: 2 x Kingston A400 120 GB SSD Mirrored (using motherboard SATA 6 Gb/s)
  • Pool_1: 5 x WD Red 3 TB RAIDZ2 (CIFS and PROD VMware VMs)
  • Small_n_Slow Pool: 1 x Western Digital Blue WD3200AAKS 320GB and 2 x Seagate Barracuda 7200.10 300 GB (DEV VMware VMs)

I did not need dual Xeons running at up to 95W each. The dual 740W power supplies were not helping either. (Yes, I know that they don’t always draw 740W.)

I really love TrueNAS. It has amazing flexibility and stability. The first plan of attack was to see about swapping out components, but even on the used market this would have been costly because, well, it is a server – you can’t easily swap out the motherboard and put in a single processor with lower power requirements. And that doesn’t address the issue of the dual 740W power supplies. Even the RAM is relatively power hungry compared to modern components. So, that would not work.

I briefly thought about going to an iXsystems TrueNAS Mini but that was not inexpensive either and there is no rackmount model (yes, a little “vanity” on my part). The rackmount TrueNAS systems ain’t cheap either.

After additional thought, I started considering Synology (again). About 10 years ago, I purchased a Synology DS211j, a nice little unit that introduced me to DSM. Over time, its ability to transcode was degraded by the march of technology. I then purchased a QNAP TS-219P II just to serve as a media server. With newer releases of DSM, the performance of the DS211j became painful. I then upgraded to a DS216+ II and retired the DS211j. Over time, the 1TB drives were upgraded to 2TB drives and then 3TB drives, with the older drives being handed down to the older NASes.

Finally, the limitation of the two bays became evident as I increased the number of VMs I had. I had started using iSCSI to expand the storage available to the ESXi server(s) from the NASes. I was then on a quest to upgrade to something with four-plus drive bays that was rack mountable. I looked at Synology, but the four-bay unit was overpriced (in my opinion – yes, I know that rack mount adds about 25% or more to the price, as cooling requirements in a rack have implications and, well, they always seem to charge more for rack gear because they can…). After doing my research, particularly from information from Tom Lawrence of Lawrence Technology Solutions, I decided on FreeNAS. After looking at price (including shipping), I ended up with the aforementioned Supermicro server. (Which came with rails. <rant>Why do used server vendors charge for rails? Average $100? What do they do with the ones they can’t sell? Argggg!!!</rant>)

So, I was back at what to replace the TrueNAS server with? Well, I’m back to Synology. A RS1221+ to be exact. It is a nice little box:

  • AMD Ryzen V1500B (64-bit, 4-core 2.2 GHz)
  • 4 GB DDR4 ECC SODIMM expandable to 32 GB (16 GB x 2)
  • 4x RJ-45 1GbE LAN Port
  • 1 x eSATA which I can use for the 4-bay RX418 expansion unit if needed
  • 1 x Gen3 x8 PCIe slot (x4 link)

The RS1221+ is not perfect. At this price level:

  • It should have 10GbE ports by now
  • It only has eSATA expansion for four additional drives – why no SAS or InfiniBand?
  • Only one PCIe slot – it has room for two
  • DSM’s approach to VLANs definitely needs work. There is a command line hack to allow multiple VLANs on an interface, but the GUI does not easily let you know which is which. Obviously, if you can do it on the command line it can be done in the GUI. This is Linux-based after all. I have a feature request in for that
  • NFS cannot be bound to a specific NIC (or subnet/VLAN). I had my TrueNAS NFS shares split off on a different interface to segregate them from the rest of the network traffic. I like the flexibility of NFS, but I am now back to iSCSI. Some say that iSCSI is a little bit faster, but it is nice to be able to go directly to the share without having to set up an iSCSI initiator on another device
  • No rails included (see previous rant)

I also added five new Seagate IronWolf 4TB drives. I’m not happy with the shenanigans Western Digital got up to with their Red line. Three of the old WD Red 3TB drives were moved from the old TrueNAS server to the RS1221+ as another pool (SHR-2 for the 4TB pool, SHR for the 3TB pool) for backups and additional capacity if needed. The remaining two 3TB drives went to the DS216+ II. That leaves two spare 2TB WD Red drives. I’ll have to figure out what to do with them. Maybe I’ll sell them with the Supermicro server.

The second part of my upgrades was a start on redundancy. While we’ll likely be going back to the office (or maybe not) and the classroom by the fall, work/learn-from-home is likely going to stay. And if you are at home, you need to ensure that you have connectivity. I added a UniFi SW24 G2 switch as the new core switch, leaving the old SW24 as a backup. I still need to figure out what to have as redundancy for the pfSense firewall. I’d like something rack mounted (with the same number and type of NICs so that I can simply restore the configuration). These aren’t items you can just run down to Best Buy and pick up around here.

Anyway, here is what the rack looks like now (all cleaned up):


OMG – Pelleys.com Was 20 in October

I just realized that I registered pelleys.com back on October 2, 2000…


New Camera for wx.pelleys.com

Back in August, August 23rd to be exact, the camera for wx.pelleys.com died in the afternoon. It was a nice warm day. The camera, a Panasonic TK-C750C, was about 10 years old. That’s a pretty good age given it sits outside in an enclosure. The enclosure, actually the second one, has a heater and a fan for cooling, but in this environment (wind-blown sea air) I’m not surprised it failed. It looks like the sensor broke. The sea air actually corroded the chromed thumbscrews on the lens! The arrows point to the formerly silver thumbscrews.


I started my search but I couldn’t decide on whether to get another analog camera or go with a higher definition IP-based camera. As noted in my previous post, my dad got ill in September and then passed away in October.

Finally, I got around to replacing the camera. The weather had to be decent (which we haven’t had for the past two weeks – rain, drizzle, fog and lots of wind) to replace it. I decided to cheap out and went for another used analog camera, this time a Panasonic WV-CP484 with a 2.8-12mm lens. The lens has a slightly higher zoom range than the old camera’s.

If I can get another 5 years out of this camera, I’ll be pleased.

Anyway, wx.pelleys.com’s webcam is back up.
