DellWorld 2012 – Day 2

(My ears are still ringing from the awesome Camp Freddy performance last night, so this post may be a little confusing.)

The morning started with a nice buffet-style breakfast, laid out a little better than I’ve seen at other conferences and on par with Microsoft TechEd. It’s a small thing, but executing well on serving thousands of people in a relatively small room makes a conference better than average. I’ll try to get a picture tomorrow morning to show what I mean.

DellWorld 2012

So, I’m going to Austin, TX this week for DellWorld 2012, courtesy of Dell and the wonderful folks at DellTechCenter.com. Apparently I’ve been helpful and vocal enough about Dell servers, desktops and support on Twitter and other places that they are calling me an “influencer”.

As part of that, I’m required to state that Dell flew me in to Austin for #DellWorld 2012. And paid for my hotel. And my food. So, the things that I mention here over the next few days have been influenced by this, though I’ll try really hard to be honest and clear about everything I see.

HP Insight Remote Support and Windows Server 2008

(So that this gets into Google and maybe someone else can learn from it.)

HP Insight Remote Support, when running on Windows Server 2008, has some irritating undocumented “features”.

  1. You need an entry in %systemroot%\system32\drivers\etc\hosts for the local machine and its IP address (see the example entry below the list). If you don’t have this, the RSS Insight Admin Console will not be able to resolve the IP of the local machine and you will get “Unresolved” in the IP Address column of the Systems tab.
  2. Remote Support Software Manager does not like UAC. Since the Manager app is an HTA application, the way around that is to right-click an Internet Explorer icon and select “Run as Administrator”, then browse to “C:\Program Files (x86)\HP\CM\RSSWM\GUI\swmui.hta” and open it.
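
For the hosts file entry in item 1, a minimal example is shown below; the IP address and hostnames are placeholders, so substitute the local machine’s actual address and name:

192.0.2.10    insightserver.example.com    insightserver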

More Handy SQL

To update an old post, here’s some SQL for listing out BIOS versions per model from WSUS 3:
SELECT COUNT([tbComputerTarget].[IPAddress]) AS [Count],
    [tbComputerTargetDetail].[ComputerModel],
    [tbComputerTargetDetail].[BiosVersion]
FROM [SUSDB].[dbo].[tbComputerTarget]
JOIN [SUSDB].[dbo].[tbComputerTargetDetail]
    ON [tbComputerTarget].[TargetID] = [tbComputerTargetDetail].[TargetID]
GROUP BY [tbComputerTargetDetail].[ComputerModel], [tbComputerTargetDetail].[BiosVersion]
ORDER BY [tbComputerTargetDetail].[ComputerModel]
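
If your WSUS 3 server uses the Windows Internal Database rather than a full SQL Server instance, you can run the query with sqlcmd against the local named pipe; the input file name below is just an example:

sqlcmd -S np:\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E -i bios_by_model.sql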

VMware vSphere 4 Environments

I was on furlough when I started this entry, so I was not supposed to think about work more than 4 days that week, and never for more than 8 hours in a day. So, a summary of what I’ve been doing the last few months…

Housing
New ESX Cluster

  • New Hardware and a new cluster – Housing has added 2 Dell PowerEdge R710s and an additional HP EVA 4400 SAN to be our “Production” virtualization environment. (Production is a bit of a misnomer, as we’ve had production-class virtual machines for over a year.)
  • Upgrading the existing ESX 3.5 nodes to ESX 4 is on the schedule for later this semester or early in the summer. To make the reinstall easier, I’ve scripted out the installs just about as much as possible using some excellent examples: Cylindric.net and Ubiquitous Talk.
  • We are working on a fairly aggressive plan to virtualize or retire our remaining test and development physical servers, hopefully to be completed by mid-summer. This should allow us to retire another 5-10 2U Dell 2650, 2850 or 2950 servers. Any new systems we are bringing online are being virtualized unless the vendor refuses to support it.
  • I’ve reduced 5 partially populated racks down to 2 fully populated racks and another 2 less full racks. The key to increasing our density was installing 208V power and in-rack UPSes and PDUs. (Yes, it’s not as efficient as whole room UPS, but it’s better than the 110V solution we had before.) I’d like to repeat this work in our other “data center”, but I’m sure you’ve heard about the campus budget situation, so it’s on hold.

Hoopeston Schools

  • With the knowledge and scripting that I learned at the day job, I’ve started on upgrading our two ESX 3.5 nodes to ESX 4. By leaving the upgraded node in Evaluation mode, I’m able to use Storage vMotion to migrate running VMs across storage locations.
  • Shared storage is currently powered by a pair of OpenFiler nodes in an HA/DRBD cluster (a rough sketch of the DRBD side is after this list). I followed a couple of excellent howtos: TheMesh.org and HowtoForge. I’m not completely happy with the underlying OS and package management that comes along with OpenFiler, so the plan is to reinstall the nodes with CentOS and recreate the cluster.
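
To give a rough idea of the DRBD half of that cluster, the resource definition below is a sketch of the general shape; the node names, IP addresses, and block devices are placeholders, not our actual configuration:

resource r0 {
  protocol C;
  on filer1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on filer2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

With that in place, each node brings the resource up with drbdadm up r0, and only the node currently acting as primary mounts and exports the device.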

Virtualization Projects

So, I’ve been lax in writing here lately. Quite a bit of my writing energy has been going into documentation since both jobs are using wikis now. (I’d blame Twitter, but that seems too much like a cop-out.)

But, I’ve been pretty busy at both jobs with virtualization and storage projects. Just a few of the highlights:

  • Housing
    • Our ESX node installs are now as close to fully automated as I want to make them, using Leo’s Ramblings as a starting point.
    • All our nodes are now running the latest and greatest ESX3.5u4+patches, thanks to enough excess capacity to empty one node at a time with vMotion and reinstall.
    • The HP EVA 4400 SAN has had a second Fibre Channel switch added and all the ESX nodes are now dual path to storage. The original plan for this SAN was to only have development and test level VMs, but production VMs came online once management gained confidence in virtualization and P2V conversions. Hence the need to add dual path support.
    • Some of those production VMs will involve adding “on demand” capacity to a web app that has usage peaks once or twice a year, so we’ll be adding a hardware Server Load Balancer to the mix as well. HTTP load balancing is easy, SSL support not as easy.
    • I’ve started to look at vSphere, but it’s not a pressing upgrade need for us.
  • Hoopeston
    • We’ve been transitioning to ESX since finding more affordable pricing through a state contract. As we retire VMware Server, we’ve been able to greatly increase VM density on the existing Dell PowerEdge 2900. And we’ve purchased a new Dell R710, which is showing promise for much higher density than the 2900.
    • Since ESX supports iSCSI, we’re investigating using Openfiler with DRBD and Linux HA as a storage option. (Some very good howtos are here, here and here; a rough sketch of the Linux HA piece is after this list.)
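
To give a feel for the Linux HA piece, a heartbeat v1-style haresources line for a cluster like this might look like the example below; the node name, virtual IP, DRBD resource, and mount point are all placeholders:

filer1 IPaddr::192.0.2.50/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/mnt/data::ext3

Heartbeat starts those resources in order on whichever node is active and moves them to the peer if heartbeats stop.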

Over the next few months, Housing is planning on purchasing an additional SAN for “production” workloads and continuing to virtualize anything that seems like a good idea. (And maybe a few things that are a bit of a stretch.)