Server/Network Upgrade – Almost at the End… For Now

Well, I’m almost at the end of the server/network upgrade for now. Since the last update, here’s where things stand:

  1. I rebuilt the network configuration (about 85% of it).
    1. I had a weird problem with DHCP requests. Clients on any subnet except the two WiFi VLANs/subnets would not get an address assigned. It turns out that, through the evolution of my network, I had assigned the same subnet to both a physical interface and a VLAN on the pfSense firewall. I’m not a network engineer in any sense of the term (at my day job I feel like I’m 1 mm deep and 1 km wide), so there may be some reason you would do this. Anyway, I removed the VLAN and made that subnet my default “core” network.
    2. I think that, for some reason (maybe the issue above), the two UniFi Switch 8s (non-PoE) were funky with DHCP requests, VLAN assignments and connections between the switches (e.g., the Switch 8 in my son’s gaming room to the Switch 8 in the TV room to the Switch 24 in the rack). Google University and the forums didn’t give me much help beyond the suggestion that the configurations on the Switch 8s might be corrupt. I reset both Switch 8s to factory configuration and deleted them from the UniFi controller. Once I re-adopted the two Switch 8s into the controller and reconfigured the VLANs on the ports, everything worked just fine. Plus, the Switch 8 in my TV room can now power the Switch 8 in the gaming room. (The Switch 8 can be powered by PoE, and one of its ports can provide PoE. That was a nice bonus.)
    3. I added a UniFi Switch Flex Mini for the desk in the office. I can put VLANs on that small switch for testing, etc.
    4. The move to the Monoprice Cat6A SlimRun patch cables is almost complete in the rack. I only need to buy another pack of 1-foot (or 2-foot) patch cables. I ordered some 6-inch patch cables and only one was long enough to use. Dunh! I have 10-foot cables for the servers: yellow for FreeNAS (eventually to be TrueNAS), red for the HP DL360 G7 and orange for the DL360p Gen8 (more about that below). I bundled the servers’ patch cables into umbilicals. Connections to non-network/non-server devices are purple SlimRun patch cords (I still need to replace a few older runs). White will be used for in-rack networking (e.g., those 6-inch ones).
  2. I bought an HP DL360p Gen8 with 2 x Xeon E5-2650 2.0 GHz 8-core CPUs, 128 GB of HP SmartMemory (8 x 16 GB) PC3-12800R (DDR3-1600) Registered ECC memory, 4 x HP Enterprise 300 GB 6G SAS 15K SFF hot-plug hard drives, an HP embedded Smart Array P420i/1GB FBWC RAID controller and 2 HP power supplies. I also have iLO 4 Enhanced, which allows for an HTML5 remote console. The DL360 G7, on the other hand, now only supports remote console under Windows with the HP iLO Integrated Remote Console application. I messed around a little with getting the Remote Console application to work under Mint with Wine, but couldn’t get it going. I didn’t spend too long on it, but it’s a sign that the G7’s remote management is on its way out. (The Supermicro Java iKVM app only works with Firefox and IcedTea – for now…)
  3. I installed VMware ESXi 7.0 on the Gen8 and moved the VMs over. The VMs are stored on the FreeNAS server over NFS. I’d advise NFS rather than iSCSI because you can easily share the storage between endpoints (multiple ESXi hosts, or mounting it from a workstation). The G7 is running ESXi 6.7 – a fresh install replacing the old ESXi 6.0 one. The G7 is now used for testing and experimentation; when I’m not using it, I turn it off.
  4. I had one 300 GB SATA drive in the FreeNAS box but remembered I had two more 320 GB SATA drives in the old QNAP 2-bay NAS. I deleted the 300 GB pool (the first one I added to the FreeNAS box when I was first setting it up), added the two 320 GB drives (all 8 bays are now filled) and created a 600 GB “small and slow” RAID-5 pool for the G7 to run test VMs on. Given that it is only 600 GB, it might actually force me to delete old VMs. 🙂
  5. I added another UPS, an APC Back-UPS 1500. The older Back-UPS XS 1300 was a little taxed with everything on it. The server power supplies are split between the two UPSes, and the network gear is on the 1500.
  6. Finally, for aesthetics, I added 1U filler plates. After looking at the price of the metal filler plates on Amazon and eBay (what, $20 each!?!?!) I made some out of backing board I had left over from the homemade rack. I learned that even with a circular saw I couldn’t cut a good straight line, so I borrowed my neighbor’s table saw (Thanks, Phil!) and re-cut them. I still need to paint them black (at some point – I hate painting as I always make a mess…). Here’s what it looks like now (the two chassis on the bottom might be used for some additional SAS storage in the future, but I mounted them to get them out of the way. They don’t look too bad there.):
Front of Rack
Back of Rack
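A quick way to spot the duplicate-subnet misconfiguration from item 1 is to list each interface with its subnet and look for any subnet claimed twice. A minimal sketch – the interface names and addresses below are made up, not my actual config:

```shell
# Interface:subnet pairs, as you might transcribe them from the pfSense UI.
# Note the same subnet on both the physical interface (em0) and a VLAN (em0.10).
ASSIGNMENTS="em0:192.168.1.0/24
em0.10:192.168.1.0/24
em0.20:192.168.20.0/24
em0.30:192.168.30.0/24"

# Any subnet listed more than once is claimed by two interfaces, so DHCP
# replies for it can go out the wrong one.
DUPES=$(printf '%s\n' "$ASSIGNMENTS" | cut -d: -f2 | sort | uniq -d)
echo "Subnets assigned twice: ${DUPES:-none}"
```

Removing the redundant VLAN (as described above) leaves each subnet with exactly one owner.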
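For anyone following the NFS route from item 3, mounting a FreeNAS export as an ESXi datastore is a one-liner with `esxcli`. The host, export path and datastore name below are hypothetical; this sketch just assembles and prints the command rather than running it:

```shell
# Hypothetical names - substitute your NAS hostname, export path and label.
NAS_HOST="freenas.local"
NAS_SHARE="/mnt/tank/vmstore"
DS_NAME="nfs-vmstore"

# NFSv3 datastore mount, to be run in the ESXi shell (or over SSH) on each host.
# The same dataset can also be NFS-mounted from a workstation for browsing.
CMD="esxcli storage nfs add --host=${NAS_HOST} --share=${NAS_SHARE} --volume-name=${DS_NAME}"
echo "$CMD"
```

Running the same command on both the Gen8 and the G7 is what makes the shared-storage advantage over iSCSI so painless.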
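On the pool size in item 4: RAID-Z1 (ZFS’s RAID-5 analogue) gives you roughly (N − 1) times the smallest member disk. Assuming the freed-up 300 GB drive joined the two 320 GB drives in a three-disk pool (my reading of the drives mentioned, not stated outright above), that works out to the 600 GB figure:

```shell
# Back-of-the-envelope RAID-Z1 capacity: (N - 1) x smallest member disk.
DRIVES="300 320 320"   # GB; assumed three-disk makeup
N=0
SMALLEST=""
for d in $DRIVES; do
  N=$((N + 1))
  # Track the smallest drive; RAID-Z treats every member as that size.
  if [ -z "$SMALLEST" ] || [ "$d" -lt "$SMALLEST" ]; then SMALLEST=$d; fi
done
USABLE=$(( (N - 1) * SMALLEST ))
echo "Approximate usable capacity: ${USABLE} GB"
```

The mismatched 320 GB drives each donate only 300 GB, which is the price of mixing sizes in one vdev.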

About Mike Pelley

Let’s see… A little about me… I’ve been around information technology since 1983, starting with computers such as the DEC Rainbow (weird machine – the standard DOS couldn’t format its own floppy disks – remember those? – and I had to format them on a friend’s IBM PC), the Radio Shack TRS-80, and the Apple ][e and Apple //c. I have programmed in 8-bit assembly language on the 6502; FORTRAN and COBOL on the IBM System/370 (and I still hate JCL); VAX BASIC and COBOL (and a weird and massive WordPerfect 4.0 macro) on DEC VMS (Alpha); C/C++ on Digital Unix (Alpha); and C/C++, Perl (it may be powerful but I still hate it) and PHP on Linux (Red Hat, CentOS, Ubuntu, etc.). I have worked with databases such as Digital Rdb (later to become Oracle Rdb), Oracle DBMS, Microsoft SQL Server, MySQL and PostgreSQL on VAX, Alpha, Sun and Intel. Check out my professional profile and connect with me on LinkedIn: http://lnkd.in/nhTRZe

I still think Digital created some of the best ideas in the world: VAX clustering, DSSI disks and the Alpha processor (the first commercial 64-bit processor – Red Hat screamed on an Alpha!). DEC just could not seem to be able to give air conditioners away to someone lost in the Sahara Desert! VMware is one of the best ways to get the most out of an x64 server, and I have tried Oracle VM, VirtualBox and Microsoft Virtual Server.

Outside of that, I am a huge military history buff, starting with the early 20th century. I love Ford Mustangs (my ’87 Mustang GT was awesome), and if I had the money I would have a Porsche 928 S4. If I had a lot of money, I would have a Porsche 911 Turbo. I also play too much ArmA 3 Exile mod – over 5,000 hours… I have a wonderful son, Cameron, and a long-suffering (“Do you really need all that computer junk?”) wife, Paula. I live in Paradise, Newfoundland and Labrador.
