Wednesday 22 July 2009

Can you have a purely virtual Hyper-V cluster?

Ok, I know I said that I would do a write-up of a sandbox eval of Hyper-V – but after building out part 1, I changed my mind and decided to have some fun trying out new eval software and different products in unusual configurations. My aim was to see if I could create a purely virtual Hyper-V cluster.

So I’ve dual-booted my main PC and added the Windows 7 x64 RC, which, from my limited time using it, WILL be high on my “buy” list once released – even the wife likes it!! All the hardware drivers were installed straight away, it booted without issues, and it has so far installed all my add-ons/tools/extras without a hitch. That saved the usual man-hours of pain spent installing drivers by hand – if you rebuild a lot for testing, this is a great bonus!

However, it’s the features that I’m really enjoying. In no particular order:-

  • Compatibility mode – software not running? Try it under XP SP3 compatibility
  • Snipping Tool – takes a screenshot which you can easily edit
  • Sticky Notes – virtual post-its that don’t drop off your screen after 24hrs like the physical ones do!
  • Dual-screen drivers included – no more messing with the ATI Catalyst Control Centre
  • Libraries for certain file types, right from the taskbar

I know there are loads more features, but those were enough to win me over on first use. Ok, enough about Windows 7. Next on the list was Sun’s VirtualBox. I’d heard good things about this FREE product – and again I was impressed. It’s good. It may not have all the extra features of VMware Workstation, but comparing VMware Workstation’s ~£80 price to VirtualBox, Sun’s offering wins hands down. It has all the features you’ll need for everyday eval testing.

So that’s Windows 7 for the base PC OS and VirtualBox to host the VMs; next was Windows 2008 R2 RC for Hyper-V R2. Ok, that was in the original sandbox lab, but this time I decided to try out VMware ESX4i hosting 2 Windows 2008 R2 Hyper-V servers in a cluster – with the shared disks being hosted from local disks on the ESX4i server. If you’ve never built an ESX4i server before, there is not much to it, as there are no build configs you can change. Put the disc in, let it build, then configure the IP settings and set the root password – and you’re done! Browse to https://<ip address> of the ESX4i server and install the vSphere Client from the webpage.

Using ESX4i I wanted to see:-

  • Would Hyper-V run virtualised?
  • Would the 2008 R2 CSV clustering work?
  • Would ESX memory over-commit work with Hyper-V?

So I installed ESX4i on one of my HP ProLiant ML110 boxes and hit the first snag: the vSphere Client doesn’t work on Windows 7 – but with a little “googling” I found the solution on the VMware forums. Nice work Ftubio!! With the client fixed, I built 1 Windows 2008 R2 DC VM on VirtualBox on my PC, and 2 Windows 2008 R2 Hyper-V VM servers on the ESX4i ML110, following this VMware KB. The physical server has 8GB of RAM – the 2 Windows 2008 servers were assigned 6GB each, so memory was deliberately over-committed.
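For anyone searching for the same fix, the workaround from that thread, as best I remember it (so treat the exact paths as approximate and check the original forum post): drop a working System.Windows.Forms.dll from a .NET 3.5 SP1 machine into a Lib folder under the client’s Launcher directory, point a DEVPATH system environment variable at that folder, and add a developmentMode switch inside the existing <configuration> element of VpxClient.exe.config:

    <runtime>
      <developmentMode developerInstallation="true"/>
    </runtime>

With DEVPATH set to something like C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\Lib, the client starts on Windows 7.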

While the Hyper-V cluster nodes were building, I built the W2008 R2 DC, using dcpromo from the Run command. At this point I noticed an extra tab in AD Users and Computers for Personal Virtual Desktop – when the Hyper-V servers are built I’ll test this out.
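If you’d rather script that step, dcpromo takes an answer file. A minimal one for a throwaway test forest looks roughly like this – the field names are from the 2008 R2 unattend docs, and the domain name and password here are just my lab values:

    [DCINSTALL]
    ReplicaOrNewDomain=Domain
    NewDomain=Forest
    NewDomainDNSName=test.local
    DomainNetBiosName=TEST
    InstallDNS=Yes
    SafeModeAdminPassword=P@ssw0rd1
    RebootOnCompletion=Yes

Then run it with dcpromo /unattend:C:\dcpromo.txt.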

Once the Hyper-V servers are built, add the IP of your domain controller as the DNS server, then join the servers to your test domain. Also ensure you select the “Allow Remote Desktop” option.
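From an elevated prompt on each node, those three steps boil down to something like this – adapter name, addresses and domain are from my lab, so swap in your own (and if netdom isn’t on the box, the GUI join works just as well):

    netsh interface ipv4 set dnsservers name="Local Area Connection" source=static address=192.168.0.10
    netdom join %COMPUTERNAME% /domain:test.local /userd:administrator /passwordd:*
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
    netsh advfirewall firewall set rule group="remote desktop" new enable=Yes

A reboot after the netdom join finishes the job.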

Once done, shut down the Hyper-V VMs and follow the VMware KB to add the “LUN” drives to the primary Hyper-V node, and then to the secondary node. ENSURE YOU SET THE SCSI PATH TO SCSI 1:0 OR ABOVE – you’ll need a second SCSI controller for the shared disks! I created three drives:

  • Q: quorum drive – 512MB
  • H: data drive – I assigned 50GB
  • J: data drive – again 50GB
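Once the nodes are powered back on, each of these disks needs onlining and formatting from the primary node before the cluster can use them. A quick diskpart sketch for the quorum disk – disk numbers will vary, so check list disk first:

    diskpart
    list disk
    select disk 1
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs quick label=Quorum
    assign letter=Q

Repeat for the two 50GB data drives with letters H: and J:.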

One reason for using ESX4i is that the GUI now handles the creation of the shared virtual disks – it used to be a command-line process, so thanks, VMware devs. While you create the SCSI bus paths, note down which drive uses which ID. (Tip – use the Sticky Notes in Windows 7.)
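For the curious, what the GUI is doing under the hood amounts to a handful of .vmx entries per node, roughly like these (the file name is just my quorum disk – illustrative, not definitive):

    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"
    scsi1.sharedBus = "virtual"
    scsi1:0.present = "TRUE"
    scsi1:0.deviceType = "scsi-hardDisk"
    scsi1:0.fileName = "quorum.vmdk"

The sharedBus = "virtual" setting on the second controller is what lets both VMs on the same host open the disk at once; as I understand it, the shared disks also need to be thick-provisioned.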

Once done, power on the 2 nodes and RDP onto the primary node. The next step was to install Hyper-V – but alas, here the tale ends … when you install Hyper-V, Microsoft checks the chipset, and in this case it returned an error saying the chipset/BIOS didn’t support virtualisation.

A quick check of the VM’s BIOS confirmed this. Still, no great loss, as I needed an Exchange cluster lab to play with anyway!
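If you want to confirm what a guest actually sees before burning time on the role install, Sysinternals Coreinfo will dump the virtualisation-related CPU flags from inside the VM:

    coreinfo -v

If VT-x (or AMD-V on AMD kit) shows up as not present, Hyper-V’s installer will refuse to go any further – exactly as it did here.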

But this got me thinking: what other virtualisation software/configuration could we use? VirtualBox was a no – see this link. A quick check of Jose Barreto’s blog does indicate Hyper-V can host a purely virtual cluster, but you have to use iSCSI shared storage. So I think I'll try nesting a Hyper-V cluster on a Hyper-V host and check my configs/response/performance.
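When I do get a working pair of nodes, the clustering side itself can be driven from PowerShell in R2. Something along these lines should do it – the node names, cluster name and IP are placeholders for whatever your lab uses:

    # On each node: install the Failover Clustering feature
    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering

    # From one node: validate, then create the cluster
    Import-Module FailoverClusters
    Test-Cluster -Node HV1,HV2
    New-Cluster -Name HVCLUSTER -Node HV1,HV2 -StaticAddress 192.168.0.50

    # CSV is off by default and has to be enabled per cluster
    (Get-Cluster).EnableSharedVolumes = "Enabled"
    Add-ClusterSharedVolume -Name "Cluster Disk 2"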
