I’m headed to Austin, TX next week for DellWorld 2013, courtesy of Dell and the wonderful folks at DellTechCenter.com. Apparently I’ve been helpful and vocal enough about Dell servers, desktops and support on Twitter and other places that they are calling me an “influencer” again this year.
As part of that, I’m required to state that Dell flew me in to Austin for #DellWorld2013. And paid for my hotel. And my food. So, the things that I mention here over the next few days have been influenced by this, though I’ll try really hard to be honest and clear about everything I see.
Kind of hard to believe it, but the ESX 4 cluster mentioned back in 2010 has essentially been the same since it was put in, except for the addition of a third, slightly newer node. And the Test/Dev cluster has also been stagnant since late 2008. But, the times they are a changing.
(My ears are still ringing from the awesome Camp Freddy performance last night, this post may be a little confusing.)
The morning started with a nice buffet-style breakfast, laid out a little better than I’ve seen at other conferences, on par with Microsoft TechEd. It’s a small thing, but executing well on serving thousands of people in a relatively small room makes a conference better than average. I’ll try to get a picture tomorrow morning to show what I mean.
So, I’m going to Austin, TX this week for DellWorld 2012, courtesy of Dell and the wonderful folks at DellTechCenter.com. Apparently I’ve been helpful and vocal enough about Dell servers, desktops and support on Twitter and other places that they are calling me an “influencer”.
As part of that, I’m required to state that Dell flew me in to Austin for #DellWorld 2012. And paid for my hotel. And my food. So, the things that I mention here over the next few days have been influenced by this, though I’ll try really hard to be honest and clear about everything I see.
I was on furlough when I started this entry, so I was not supposed to think about work more than 4 days that week, and never more than 8 hours in a day. So, a summary of what I’ve been doing the last few months…
- New Hardware and a new cluster – Housing has added 2 Dell PowerEdge R710s and an additional HP EVA 4400 SAN to be our “Production” virtualization environment. (Production is a bit of a misnomer, as we’ve had production-class virtual machines for over a year.)
- Upgrading the existing ESX 3.5 nodes to ESX 4 is on the schedule for later this semester or early in the summer. To make the reinstall easier, I’ve scripted out the installs just about as much as possible using some excellent examples: Cylindric.net and Ubiquitous Talk.
- We are working on a fairly aggressive plan to virtualize or retire our remaining test and development physical servers, hopefully to be completed by mid-summer. This should allow us to retire another 5-10 2U Dell 2650, 2850 or 2950 servers. Any new systems we are bringing online are being virtualized unless the vendor refuses to support it.
- I’ve reduced 5 partially populated racks down to 2 fully populated racks and another 2 less full racks. The key to increasing our density was installing 208V power and in-rack UPSes and PDUs. (Yes, it’s not as efficient as whole room UPS, but it’s better than the 110V solution we had before.) I’d like to repeat this work in our other “data center”, but I’m sure you’ve heard about the campus budget situation, so it’s on hold.
- With the knowledge and scripting that I learned at the day job, I’ve started upgrading our two ESX 3.5 nodes to ESX 4. By leaving the upgraded node in Evaluation mode, I’m able to use Storage vMotion to migrate running VMs across storage locations.
- Shared storage is currently powered by a pair of OpenFiler nodes in an HA/DRBD cluster. I followed a couple of excellent howtos: TheMesh.org and HowtoForge. I’m not completely happy with the underlying OS and package management that comes along with OpenFiler, so the plan is to reinstall the nodes with CentOS and recreate the cluster.
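The HA/DRBD pairing behind that shared storage is defined by a resource stanza in drbd.conf. A minimal sketch of the shape, not our actual configuration: the hostnames, disks, and addresses below are made up for illustration, and the howtos linked above cover the real details.

```
# /etc/drbd.conf -- hostnames, disks, and IPs are illustrative only
resource r0 {
  protocol C;                    # synchronous replication
  on filer1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.1:7788;
    meta-disk internal;
  }
  on filer2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.2:7788;
    meta-disk internal;
  }
}
```

Protocol C is the usual choice for a two-node HA pair, since a write isn’t acknowledged until it has hit both nodes.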
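The scripted node installs mentioned in the upgrade bullet above boil down to a kickstart-style ks.cfg fed to the ESX installer. A rough sketch of the shape, not our actual file: every value here (URL, addresses, password hash, partitioning) is a placeholder, and the directives should be verified against the ESX 4 scripted-install documentation before use.

```
# ks.cfg sketch -- every value is a placeholder
accepteula
keyboard us
rootpw --iscrypted $1$XXXXXXXX$              # placeholder hash
timezone America/Chicago
network --bootproto=static --ip=192.168.1.21 --netmask=255.255.255.0 \
        --gateway=192.168.1.1 --nameserver=192.168.1.2 --device=vmnic0
install url http://ks.example.edu/esx4/
clearpart --firstdisk --overwritevmfs
autopart --firstdisk

%post
# site customization: NTP, syslog target, extra users, etc.
```

The %post section is where most of the per-site scripting effort ends up, since everything above it is the same on every node.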
So, I’ve been lax in writing here lately. Quite a bit of my writing energy has been going into documentation since both jobs are using wikis now. (I’d blame Twitter, but that seems too much like a cop out.)
But, I’ve been pretty busy at both jobs with virtualization and storage projects. Just a few of the highlights:
- Our ESX node installs are now as close to fully automated as I want to make them, using Leo’s Ramblings as a starting point.
- All our nodes are now running the latest and greatest ESX 3.5u4 plus patches, thanks to enough excess capacity to empty one node at a time with vMotion and reinstall.
- The HP EVA 4400 SAN has had a second Fibre Channel switch added and all the ESX nodes are now dual path to storage. The original plan for this SAN was to only have development and test level VMs, but production VMs came online once management gained confidence in virtualization and P2V conversions. Hence the need to add dual path support.
- Some of those production VMs will involve adding “on demand” capacity to a web app that has usage peaks once or twice a year, so we’ll be adding a hardware Server Load Balancer to the mix as well. HTTP load balancing is easy, SSL support not as easy.
- I’ve started to look at vSphere, but it’s not a pressing upgrade need for us.
- We’ve been transitioning to ESX since finding more affordable pricing through a state contract. As we retire VMware Server, we’ve been able to greatly increase VM density on the existing Dell PowerEdge 2900, and a new Dell R710 we purchased is showing promise for much higher density than the 2900.
- Since ESX supports iSCSI, we’re investigating using Openfiler with DRBD and Linux HA as a storage option. (Some very good howtos are here, here and here.)
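The Linux HA half of that pairing can be surprisingly compact with Heartbeat’s v1-style configuration. A sketch of an haresources line, where the hostname, service IP, and mount details are all made up for illustration and the iSCSI target init script name varies by distribution:

```
# /etc/ha.d/haresources -- illustrative names only
# On failover: bring up the service IP, promote the DRBD resource,
# mount the filesystem, then start the iSCSI target.
filer1 192.168.10.10 drbddisk::r0 Filesystem::/dev/drbd0::/mnt/data::ext3 iscsi-target
```

Heartbeat starts the resources left to right on takeover and stops them right to left on release, which is exactly the ordering a DRBD-backed iSCSI target needs.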
Over the next few months, Housing is planning on purchasing an additional SAN for “production” workloads and continuing to virtualize anything that seems like a good idea. (And maybe a few things that are a bit of a stretch.)
Another Labor Day weekend, another year of Free, Hot Buttered Sweetcorn. A couple of improvements and changes this year worked out pretty well: the corn arrived on pallets in “totes” in a reefer trailer, we had a nice donated tent over the husker for shade, and we sold water and soda (in addition to the aluminum pans). Not so good was the lower total tonnage of corn, which required some of us to go out early Monday morning and hand-pick (cornjerk, if you will) another 1.2 tons so we could make it until 4pm.
And, I spent several hours at the school district Saturday night rolling the mail server over to new hardware. That’s a task I’m glad we only have to do every few years.
As part of the application/interview process with Housing, I was asked to give a 15 min presentation on a significant project I had been involved with, preferably one I had led, including timeline, technical skills and lessons learned. I chose to talk about my experiences virtualizing hosts at Hoopeston Area Schools. Little did I know just how similar Housing was to where Jim and I were.
Your DNS May Be EOL
So, I’ve gotten 2 separate notes from 2 separate vendors over the last couple of days proclaiming similar things. Recently, ISC declared several older versions of BIND “End of Life”. These older versions are no longer supported and may or may not have security issues. But if your boss gets one of these notes, you can be sure he or she will forward it on to the technical people out on the pointy end of the stick to answer for it. I hope you don’t even have to think twice; you shouldn’t be running this stuff anymore.
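A quick way to answer that forwarded email is to compare the version your nameserver reports against ISC’s oldest supported branch. A sketch, where the 9.4.0 cutoff is a placeholder (check ISC’s current EOL list) and the version comparison assumes GNU sort’s `-V` option:

```shell
#!/bin/sh
# is_eol CURRENT MIN: succeeds (exit 0) if CURRENT sorts strictly
# below MIN. Relies on GNU sort -V for version ordering.
is_eol() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

min="9.4.0"                               # placeholder EOL cutoff
# named -v prints e.g. "BIND 9.3.6-P1"; strip down to the bare version
ver="$(named -v 2>/dev/null | sed 's/^BIND //; s/[^0-9.].*//')"

if [ -n "$ver" ] && is_eol "$ver" "$min"; then
    echo "BIND $ver is past end of life -- time to upgrade"
else
    echo "BIND ${ver:-unknown} looks current (or named is not installed)"
fi
```

The same comparison works for any vendor EOL notice that names a minimum supported version.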
I was going through some old mail archives and I found this:
```
Date: Tue, 29 Apr 1997 01:02:01 -0500
Subject: Auto Reset of ipfwadm stats

Tue Apr 29 01:02:01 CDT 1997
Stats for yesterday:

IP accounting rules
 pkts bytes dir prot source                     destination  ports
  144 28693 i/o all  ppp01.hoopeston.k12.il.us  anywhere     n/a
 7336 2332K i/o all  ppp02.hoopeston.k12.il.us  anywhere     n/a
    0     0 i/o all  ppp03.hoopeston.k12.il.us  anywhere     n/a
    0     0 i/o all  ppp04.hoopeston.k12.il.us  anywhere     n/a
```
That’s just the earliest mail I can find, it was running before April. Hard to believe we’ve had dialup running in Hoopeston for more than 10 years.
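For what it’s worth, the nightly totals in a report like that are easy to recompute. A sketch of an awk filter that sums the bytes column, expanding the trailing `K` the way ipfwadm abbreviates large counts (the 1024 multiplier is my assumption about its rounding):

```shell
#!/bin/sh
# Sum the bytes column (field 2) of an ipfwadm accounting report,
# treating a trailing "K" as x1024 and skipping the header row.
sum_bytes() {
    awk 'NR > 1 {
        b = $2
        if (b ~ /K$/) { sub(/K$/, "", b); b *= 1024 }
        total += b
    }
    END { print total + 0 }'
}
```

Piping the header plus the four data lines above through it gives 28693 + 2332x1024 = 2416661 bytes for the day, nearly all of it from the one busy dialup line.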