As part of the application/interview process with Housing, I was asked to give a 15 minute presentation on a significant project I had been involved with, preferably one I had led, including timeline, technical skills and lessons learned. I chose to talk about my experiences virtualizing hosts at Hoopeston Area Schools. Little did I know just how similar Housing's environment was to the one Jim and I had come from.
I ended up with nine content slides; I'll cover each one.
We had several goals for this project:
Reduce the overall number of physical servers while segmenting tasks – Each building had multiple servers when a single, properly configured virtual host could handle the load.
Retire older equipment instead of recycling – We have a habit of taking 5 year old servers that have been retired from their original task and repurposing them as lower-end servers; older, out-of-warranty hardware is more likely to fail at the worst possible time.
Reduce OS costs – Using CentOS on the hosts instead of Windows saved us a small amount.
Preface: Things we could not afford
Before suggesting anything that could be solved with money, remember what we could not afford: a SAN, VMware ESX, or gigabit outside the datacenter. Most of the network runs on HP 4000M switches; 10/100 is good enough for most of our desktops.
Fewer Physical Servers
We had 3-6 servers or tasks at each building that should be logically separated for security reasons, but only one of those tasks needed dedicated hardware. So, we purchased some fairly beefy Dell PowerEdge 2900s and installed CentOS 4 and the free VMware Server product. Some example guest workloads are: Windows Domain Controllers, File/Print servers, Terminal Servers, RRAS/VPN boxes, Squid caching proxy servers, a SquirrelMail webmail system and a monitoring/Cacti host.
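For anyone who has not poked at VMware Server: each guest is defined by a small plain-text .vmx file. A minimal sketch of what one of these guests might look like follows; the name, sizes, and guest OS identifier here are illustrative assumptions, not our actual configs.

```
# dc01.vmx -- hypothetical guest definition for VMware Server 1.x
config.version = "8"
virtualHW.version = "4"
displayName = "dc01"                  # made-up domain controller guest
guestOS = "winNetStandard"            # Windows Server 2003 Standard
memsize = "1024"                      # MB; VMware Server caps a guest near 3.6 GB
numvcpus = "1"                        # at most 2 vCPUs per guest
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "dc01.vmdk"
ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"  # guest sits directly on the building LAN
```

Because these are just text files next to the .vmdk disks, backing up or relocating a guest is mostly a matter of copying a directory.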
The whole project was open-ended; individual servers were converted as new hardware was purchased.
The base OS install on the physical hosts needed to be fairly well secured: iptables to the rescue. The network configuration is also a little strange: the host is connected to the wire, but has no address configured, so nothing is listening. I also needed to install Dell OpenManage for hardware monitoring; luckily linux.dell.com has excellent instructions on that topic. The monitoring/Cacti host I mentioned above graphs the load on both the physical hardware and the guest virtual machines, giving a useful baseline for troubleshooting.
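To give a flavor of the lockdown (this is a generic sketch, not our production ruleset, and the admin subnet is made up), an iptables-restore style file for a host like this might default-deny inbound traffic and allow only SSH from a management network:

```
# /etc/sysconfig/iptables -- hypothetical example ruleset
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# loopback and replies to traffic the host initiated
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# SSH only from the (assumed) admin subnet
-A INPUT -p tcp -s 10.0.1.0/24 --dport 22 -j ACCEPT
COMMIT
```

The "no address" trick is simpler than it sounds: the bridged interface's ifcfg file just has no IPADDR, so the wire carries guest traffic while the host itself exposes nothing on it.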
We also needed a way to do something like "VMotion", but since we only have local storage on each host, the best we can do is move virtual machine disk files around with rsync.
Patching a physical machine hosting multiple guests potentially impacts many more users than patching a single server, so additional planning and notification were needed for maintenance windows. But putting many tasks on better hardware reduces the number of outages caused by recycled, out-of-warranty hardware failures.
Also, other staff needed to be able to install and support VMs, so best practices, lessons learned and general documentation became very important.
Windows Terminal Servers are memory-, CPU- and disk-intensive when you put dozens of clients on a single host. We ran into VMware Server's 3.6 GB-per-VM memory limit and its 2 vCPU limit fairly regularly, so we decided to migrate the Terminal Servers back to the physical hardware and run VMware Server on Windows. Jim ordered several licenses of PlateSpin PowerConvert; we moved the VMs off to a temporary host, did the Virtual to Physical (V2P) conversion of the terminal server, then moved the other VMs back onto the "new" physical host. And there was much rejoicing from the clients.
Most of the positives have already been covered: we reduced multiple physical machines to single hosts, retired old hardware, and proved that V2P and P2V tools work. But one unexpected win was improved access to consoles: because we could fire up the VMware Server console client and press the virtual power button, remote patching and troubleshooting became easier (and more comfortable).
I'm just going to list these off raw; most of them are self-explanatory.
- Memory and/or disk intensive workloads should run directly on the hardware
- “Passthrough” SCSI support and Tape Libraries do not get along
- Clock issues: run ntpd on the physical host, let VMware Tools (guestd) sync the guests, and run no ntpd inside the VMs
- Historical graphing and good documentation of changes are must haves
- More Ethernet interfaces: if you think you need 3, get 4 or 5
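To make the clock rule concrete, here is a sketch of what that split looks like; the pool servers and paths are generic examples, not our actual configuration:

```
# On the physical host: /etc/ntp.conf -- ntpd keeps the host clock sane
server 0.pool.ntp.org
server 1.pool.ntp.org
driftfile /var/lib/ntp/drift

# In each guest's .vmx: let VMware Tools (guestd) sync the guest clock
# from the host, instead of running ntpd inside the VM
tools.syncTime = "TRUE"
```

Running ntpd in both places makes the two time sources fight each other, which is where most of our clock drift complaints came from.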
I’m guessing the presentation went fairly well, since they hired me. And Housing has licenses for VMware ESXi, Virtual Center, VMotion and a functional SAN. New toys.