VMware vSphere 4 Environments

I was on furlough when I started this entry, so I was not supposed to think about work on more than 4 days that week, and never for more than 8 hours in a day. So, a summary of what I’ve been doing the last few months…

New ESX Cluster

  • New Hardware and a new cluster – Housing has added two Dell PowerEdge R710s and an additional HP EVA 4400 SAN to be our “Production” virtualization environment. (Production is a bit of a misnomer, as we’ve had production-class virtual machines for over a year.)
  • Upgrading the existing ESX 3.5 nodes to ESX 4 is on the schedule for later this semester or early in the summer. To make the reinstall easier, I’ve scripted out the installs just about as much as possible using some excellent examples: Cylindric.net and Ubiquitous Talk.
  • We are working on a fairly aggressive plan to virtualize or retire our remaining test and development physical servers, hopefully to be completed by mid-summer. This should allow us to retire another 5-10 of our 2U Dell PowerEdge 2650, 2850, and 2950 servers. Any new systems we bring online are being virtualized unless the vendor refuses to support it.
  • I’ve consolidated 5 partially populated racks down to 2 fully populated racks and 2 more that are less full. The key to increasing our density was installing 208V power and in-rack UPSes and PDUs. (Yes, it’s not as efficient as a whole-room UPS, but it’s better than the 110V solution we had before.) I’d like to repeat this work in our other “data center”, but I’m sure you’ve heard about the campus budget situation, so it’s on hold.
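
The scripted installs mentioned above come down to a kickstart-style ks.cfg that the ESX installer reads at boot. Here's a minimal sketch, assuming a classic ESX 4 install — the hostname, IP addresses, password hash, and disk are all placeholders, and directive names vary somewhat between ESX builds, so check it against your installer version:

```
# ks.cfg sketch for a scripted ESX 4 install -- all values are placeholders
accepteula
rootpw --iscrypted $1$XXXXXXXX$XXXXXXXXXXXXXXXXXXXXXX
install cdrom
network --device=vmnic0 --bootproto=static --ip=192.0.2.10 \
        --netmask=255.255.255.0 --gateway=192.0.2.1 \
        --hostname=esx01.example.edu
autopart --drive=sda
timezone America/Chicago
reboot

%post
# post-install tweaks (NTP, firewall openings, agent installs) go here
```

The real win is the %post section: once every node runs the same script, the nodes stay identical and a reinstall is cheap.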

Hoopeston Schools

  • With the knowledge and scripting that I picked up at the day job, I’ve started upgrading our two ESX 3.5 nodes to ESX 4. By leaving the upgraded node in Evaluation mode, I’m able to use Storage vMotion to migrate running VMs across storage locations.
  • Shared storage is currently powered by a pair of OpenFiler nodes in an HA/DRBD cluster. I followed a couple of excellent howtos: TheMesh.org and HowtoForge. I’m not completely happy with the underlying OS and package management that comes along with OpenFiler, so the plan is to reinstall the nodes with CentOS and recreate the cluster.
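
For reference, the DRBD half of that cluster comes down to a single resource definition shared by both filer nodes. A minimal sketch — node names, devices, and IPs here are placeholders, not our actual config:

```
# /etc/drbd.conf sketch -- one replicated resource backing the shared volume
resource r0 {
  protocol C;                  # synchronous replication, so failover loses no writes
  on filer1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;       # backing block device on this node
    address   192.0.2.21:7788;
    meta-disk internal;
  }
  on filer2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.0.2.22:7788;
    meta-disk internal;
  }
}
```

Heartbeat/Linux-HA then handles promoting one node to primary, mounting /dev/drbd0, and moving the service IP on failover.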

Virtualization Projects

So, I’ve been lax in writing here lately. Quite a bit of my writing energy has been going into documentation since both jobs are using wikis now. (I’d blame Twitter, but that seems too much like a cop out.)

But, I’ve been pretty busy at both jobs with virtualization and storage projects. Just a few of the highlights:

  • Housing
    • Our ESX node installs are now as close to fully automated as I want to make them, using Leo’s Ramblings as a starting point.
    • All our nodes are now running the latest and greatest ESX 3.5u4 plus patches, thanks to enough excess capacity to empty one node at a time with vMotion and reinstall.
    • The HP EVA 4400 SAN has had a second Fibre Channel switch added, and all the ESX nodes are now dual-pathed to storage. The original plan for this SAN was to host only development and test level VMs, but production VMs came online once management gained confidence in virtualization and P2V conversions. Hence the need to add dual-path support.
    • Some of those production VMs will involve adding “on demand” capacity to a web app that has usage peaks once or twice a year, so we’ll be adding a hardware Server Load Balancer to the mix as well. HTTP load balancing is easy; SSL support, not so much.
    • I’ve started to look at vSphere, but it’s not a pressing upgrade need for us.
  • Hoopeston
    • We’ve been transitioning to ESX since finding more affordable pricing through a state contract. As we retire VMware Server, we’ve been able to greatly increase VM density on the existing Dell PowerEdge 2900. We’ve also purchased a new Dell R710, and it’s showing promise for much higher density than the 2900.
    • Since ESX supports iSCSI, we’re investigating Openfiler with DRBD and Linux HA as a storage option. (Some very good howtos are here, here and here.)
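
Pointing ESX at that Openfiler storage is mostly a matter of enabling the software iSCSI initiator and adding the cluster’s virtual IP as a discovery address. A sketch from the service console — the VIP here is a placeholder, and the software initiator’s vmhba number varies by host, so list your adapters first:

```
# enable the ESX software iSCSI initiator
esxcfg-swiscsi -e

# add the Openfiler cluster VIP as a send-targets discovery address
# (vmhba32 is a typical software-iSCSI adapter name, but check yours)
vmkiscsi-tool -D -a 192.0.2.20 vmhba32

# rescan so the new LUNs show up
esxcfg-swiscsi -s
```

Using the HA cluster’s virtual IP (rather than either node’s own address) is what lets the LUNs survive an Openfiler failover.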

Over the next few months, Housing is planning on purchasing an additional SAN for “production” workloads and continuing to virtualize anything that seems like a good idea. (And maybe a few things that are a bit of a stretch.)
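
One closing note on the dual-path work mentioned above: after cabling in the second Fibre Channel switch, it’s worth confirming from each node’s service console that every LUN really does show two paths. A sketch — the exact HBA and LUN names will differ per host:

```
# list all paths per LUN; with the second switch in, each LUN should show
# two paths (one per fabric)
esxcfg-mpath -l

# rescan the FC HBAs if the new paths haven't shown up yet
esxcfg-rescan vmhba1
esxcfg-rescan vmhba2
```

Cheap insurance before declaring a node redundant: a miscabled port looks fine right up until the first switch reboot.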