Virtual machine / disk image deployment?
April 4, 2008 5:00 PM Subscribe
Moving towards virtual machines for IT support at a business, and I have some questions.
I'm seeing some needs to move towards this field, but don't know where to start learning about it. I'd like to have a "quiver" of images/virtual machines that are at least the basic starting points, and have a way of working on them virtually, then deploying them onto heterogeneous hardware (no way to get identical workstation hardware for all, so something like Ghost would never work).
I want to be able to image/virtualize the servers as a fail-safe measure (if a server physically dies, we can limp along with a virtual version piggybacked on some other server until that image can be deployed to a replacement). I'd like to do the same for workstations, having the ability to reconfigure or deploy new software to a standard disk image/VM, then deploy that image to multiple workstations instead of doing it one by one. It would help as a "clean install" every once in a while too.
1) assuming I have sufficient processor/RAM/other hardware, can I run 2 servers simultaneously (let's say a Server 2003 box and an Exchange server) as virtual machines on the same piece of hardware, without any noticeable change in performance or extra maintenance/overhead?
2) can I (easily) take a disk image, turn it into a virtual machine, install programs, etc to it, then turn it back into a disk image that I can re-deploy?
I'd like to build a company "stock" image of the standard software, etc., then use Acronis True Image to image it, keep that image as a virtual machine to maintain/update, then use Acronis SnapDeploy to load that stock image onto new/repaired PCs.
That must be doable, no?
3) What's my best combo of software that plays nice? I'll want to virtualize Server 2003 and XP Pro machines. I want to use Acronis to do my imaging, because their SnapDeploy product says it's capable of deploying a single image to heterogeneous hardware (so I don't have to have a separate image/VM for every different model of PC in the organization). VMware seems like it's going to work best for working with the images themselves, and it doesn't seem to care if it's running on Linux or Windows, so I can save some dough by building a Linux box just for managing this process.
Oh, and this is important - I'm interested in deploying disk images onto workstation hard drives.
I'M NOT INTERESTED in having end users work using virtual machines. That would solve a lot of my problems, but it's just not going to happen in this organization.
Please tell me if I'm thinking about this all wrong, or if there's some other/simpler way to do this.
My team builds our desktop images using VMWare and then we deploy them to all kinds of hardware (16 models at last count), but I haven't actually used any of the Acronis tools, as we use the imaging component of Novell ZENworks. I'm not sure if Acronis does this, but we build a baseline VM of XP Pro and then add-on images with software, patches and hardware-specific drivers. We periodically update the add-ons, test them on the actual hardware to make sure they work, and add them to the virtual image.
The only real challenge in moving our virtually built image to actual hardware was dealing with the different HALs, and we got around that by including them in the hardware-specific add-ons.
posted by JaredSeth at 5:32 PM on April 4, 2008
I think it depends on what the servers are actually doing. My experience with these things makes me believe that virtualization costs a lot in hardware. The overhead and the realities of hardware will get you. Network, disk, memory bottlenecks. These are things that won't go away with virtualization.
I look at it like this: you can build a machine and run pretty much any number of services on it. But performance starts to suffer, so you split the services onto different machines. Now you are trying to put them back together, with an added layer of obfuscation in the middle. I just don't see the benefits in a server role. It seems like more of a niche problem solver.
ZENworks is the cat's ass, however. But there's a huge learning curve.
posted by gjc at 8:14 PM on April 4, 2008 [1 favorite]
Best answer: My experience is with VMWare, FYI. We use it pretty extensively where I work.
1. Yes, this is quite possible. It's trivial to run multiple virtual servers on a single host at the same time. You don't even need to buy anything to do this; VMWare "Server" (their free product) will do it for you. All you do is open up and start multiple VMs, each one appears as a different tab in the management console. You can configure how you want each one networked, independently. Options include bridged networking -- so that the server gets its own IP address from your network and looks like a separate box; NAT, so that a single IP address hides multiple VMs, appearing as a single server; or host-only -- the VMs can only talk to each other, not to the rest of the network, which is great for testing or building DB/middleware/webserver stacks where only one server should talk to the outside world. (The latter is also good when you clone a system and want to play with it without interfering with the production system that's still running elsewhere on your network.)
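For reference, each of those networking modes boils down to a one-line setting in the VM's .vmx file. This is a sketch from memory (the scratch path is hypothetical, and key names may vary slightly between VMWare versions):

```shell
# Write a minimal networking stanza into a hypothetical scratch .vmx file,
# just to illustrate the format:
vmx=/tmp/demo.vmx
cat > "$vmx" <<'EOF'
ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"
EOF
# Alternatives for the connectionType line:
#   ethernet0.connectionType = "nat"       # VMs hide behind the host's IP
#   ethernet0.connectionType = "hostonly"  # VMs see only each other and the host
grep connectionType "$vmx"
```

Each VM gets its own .vmx, so you can mix modes freely across the VMs on one host.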
2. I believe so. VMWare can start up a virtual machine either from an "image" (basically the contents of a virtual machine's hard drive and RAM written to disk as files, stored on the host's filesystem), or from a bootable drive. I know people with dual-boot Linux/Win32 machines who can boot into one OS and then run the other OS off its native partition in a virtual machine. I think this requires VMWare Workstation, not the free Server product, and it's considered an "Advanced Feature".
3. I would use VMWare Workstation and Server. (I'd use Server on your servers, with 64-bit Linux as the host on Xeons with lots of RAM -- you do *not* want your VMs to start hitting the host's swap! -- and Workstation for your workstation machine where you'll be building the images. For $129 Workstation is worth it, IMO.) It'll handle practically any x86 OS out there, including Linux, Windows, Solaris, BSD, DOS, OS/2, NetWare ... and if you're running on a 64-bit host, it will run them either in 64 or 32-bit mode.* I have no idea about using VMWare in combination with Acronis, though. VMWare Workstation reads quite a wide variety of machine-image file formats -- it's not limited solely to reading .vmdk files -- but I don't know what features are supported on other formats or whether Acronis is one of them. A quick Google search reveals that you are not the first person to have thought of this; apparently some people have made it work.
* You need to have both a 64-bit processor and EITHER run a 64-bit host operating system, OR run a 32-bit host OS but use a processor that supports the "VT" extensions and have them enabled in your BIOS. I think.
posted by Kadin2048 at 11:15 PM on April 4, 2008 [2 favorites]
> I look at it like this: you can build a machine and run pretty much any number of services on it. But performance starts to suffer, so you split the services onto different machines. Now you are trying to put them back together, with an added layer of obfuscation in the middle. I just don't see the benefits in a server role. It seems like more of a niche problem solver.
I've been thinking about using virtualization for certain applications in our shop, and have come to this same conclusion during my admittedly less-than-fully-informed analysis. I'd love to piggyback onto the question and hear folks with more specialized knowledge address this point.
posted by Roach at 6:20 AM on April 5, 2008
There's a tool called PowerConvert that will do some of the backup and imaging tasks you're looking for. It will work better than Acronis for doing backups - for example, it will do block-based differencing to keep the VM images constantly up to date with the physical servers. The same company (PlateSpin) also sells a backup appliance that does 25-to-1 P2V backup/DR, so with enough resources you can definitely do 2-to-1.
posted by GuyZero at 10:45 AM on April 5, 2008
>> But performance starts to suffer, so you split the services onto different machines. Now you are trying to put them back together, with an added layer of obfuscation in the middle. I just don't see the benefits in a server role. It seems like more of a niche problem solver.
> I'd love to piggyback onto the question and hear folks with more specialized knowledge address this point.
I'll take a rough stab at it, and hope it's not too much of a derail.
There are a number of reasons you might want to virtualize. One of those reasons is not combining a bunch of servers that you split for "performance reasons," if those performance reasons still stand. If you decided to put your web server and DB server onto two different boxes in order to get your system to scale, you probably wouldn't collapse them back down to one box with VMWare, unless the box you're collapsing them down onto is more than twice as powerful as the old individual servers, plus room for VM overhead.
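As a back-of-the-envelope illustration of that sizing rule (the utilization and overhead numbers here are made up for the example, not benchmarks):

```shell
# Two services that each kept an old box ~70% busy, plus an assumed
# ~10% virtualization overhead per VM, measured in "old-box units":
old_web=70; old_db=70; overhead_per_vm=10
needed=$((old_web + old_db + 2 * overhead_per_vm))   # 160 units

# A consolidation host rated at 2x an old box (200 units) has headroom;
# one rated at only 1.5x (150 units) does not.
for capacity in 200 150; do
  if [ "$needed" -le "$capacity" ]; then
    echo "capacity $capacity: fits"
  else
    echo "capacity $capacity: too small"
  fi
done
```

The point is just that the combined load plus per-VM overhead has to fit inside the new box, which is why "more than twice as powerful" is the rough threshold.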
However, performance is not the only reason why you might want to split services onto different hosts. In fact, as machines get faster and faster, I doubt it's even the primary reason for a lot of people anymore. Having each service running in its own environment is pretty nice for security and management reasons as well. When your mailserver and your web server and your DB server each have their own OS instance to run happily in, you never have to worry about whether this next upgrade to your mail server is going to force an OS upgrade that's going to break the database, or trying to find one OS version/configuration that's supported for all your services.
With virtual servers, if you want to run your firewall on OpenBSD and your mailserver on Linux and your database on Windows 2003 Server, fine -- go for it. You can do that and only buy as many physical machines as you actually need for performance. (And really slick enterprise virtualization systems, like VMWare Enterprise, will actually take multiple physical machines and tie them together, letting you move virtualized servers onto new hardware as performance requirements dictate, without downtime. I hate to sound like a VMWare fanboy, but that's pretty slick. That's something you didn't use to see much outside of mainframes.)
In addition, putting each service on its own VM server allows you to achieve better security separation. Just because someone hacks your web server doesn't mean they'll have access to your entire database, or to your email server. (In reality, you shouldn't consider VMs quite as secure as separate physical machines; there's still a chance someone may figure out a way to 'jailbreak' the VM, although I don't know if this has been done in the wild. But it's not trivial, at least.) Doing this with separate physical machines can be expensive, especially if not all the machines are fully utilized. Virtualization lets you create VMs for every service with less wasted capacity, and it's easier from an admin perspective than using all the OS-specific tools for isolating multiple services running on the same machine.
And finally, even if you did originally separate services onto different physical machines for performance reasons, it's entirely possible, if that decision was made a few years ago, that by now it may be cheaper just to get one big machine than pay the continuous running and support costs for multiple small servers. This is particularly true if your vendor starts ratcheting up the support costs as the hardware ages, but you haven't really outgrown it performance-wise (i.e. it would be wasteful to purchase multiple modern versions of the same class of server).
So anyway, those are sort of the three biggest arguments I've heard for virtualization of production systems: ease of management/administration, security, and maintenance cost. When you get into virtualization for development or test purposes, that's a whole different ball of wax. There, it's usually time-savings and clearance of desk clutter.
posted by Kadin2048 at 4:52 PM on April 5, 2008
1) assuming I have sufficient processor/RAM/other hardware, can I run 2 servers simultaneously as virtual machines on the same piece of hardware, without any noticeable change in performance or extra maintenance/overhead?
My experience is with Linux, not Windows, but in my experience, yes.
2) can I (easily) take a disk image, turn it into a virtual machine, install programs, etc to it, then turn it back into a disk image that I can re-deploy?
At least with Xen, yes. But the storage requirements are kinda stiff. I'd keep mine on a USB drive. Although we don't use any special image software to archive/deploy the images, the images are already portable binary disk containers stored in a file on the drive. (In our case, a Fibre Channel NAS)
There's a big difference between virtual server instances and desktop imaging. For desktop imaging, you need to use something like Acronis. For server virtual images that you'll be running in a Xen Dom0 or similar, you don't need anything special at all... just enough storage to deploy a copy of your default image.
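A minimal sketch of that file-backed approach (hypothetical /tmp paths for illustration; in practice the files would live on the shared storage mentioned above, and this assumes GNU dd/cp on Linux): the "default image" is just a file acting as a guest's disk, and deploying a new instance is a copy.

```shell
# Create a sparse 1 GiB file to act as a guest's virtual disk
# (sparse: it consumes almost no real space until written to):
dd if=/dev/zero of=/tmp/base.img bs=1M count=0 seek=1024 2>/dev/null

# "Deploying" a new guest from the golden image is just a copy,
# keeping it sparse so unwritten space still costs nothing:
cp --sparse=always /tmp/base.img /tmp/guest1.img

# A Xen domU config would then reference it roughly like:
#   disk = ['file:/tmp/guest1.img,xvda,w']
ls -l /tmp/base.img /tmp/guest1.img
```

Both files report a 1 GiB apparent size while occupying essentially zero blocks until the guest writes data, which is why the storage requirements are manageable despite keeping a library of images.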
posted by SpecialK at 5:08 PM on April 4, 2008