
Beyond Consolidation: Building a Better Development Environment with VMware


In this article Mak King describes an advanced application of virtualization that goes far beyond server consolidation. Learn about the techniques and benefits of putting your developers on virtual servers, proving that virtualization is far from limited to commonplace consolidation. Additionally, and entirely without a desire to jump on a bandwagon, the approach described in Mak's article is clearly "green computing."

If you need Mak's advice or help in implementing this or a similar development environment, please contact info@ShiftMETHOD.com

Introduction

If you have worked in IT at all during the past few years, you are likely aware of virtualization. The rallying cry from the rooftops about virtualization's benefits has almost become annoying. Adherents claim it is the greatest boon to IT of the past decade. Assertions are made that it frees up equipment, saves money, and basically does everything just shy of aligning the planets and feeding the starving masses. Is virtualization all it is cracked up to be? What are the real benefits after it is implemented? I found out, with some real-world lessons, by implementing virtualization in the development department at the company I work for, NYCE Payments Network, LLC, which is part of Metavante. NYCE is an electronic payments network that connects the accounts of over 89 million cards with more than 2 million ATMs and point-of-sale locations nationwide. NYCE provides consumers with secure, real-time access to their money anywhere, any time they need it. Thus, uptime, reliability and scalability are critical for any infrastructure we choose to implement.

My Background

Having worked in IT for about 12 years, I felt I had a pretty good range of experience in implementing new technologies to solve business problems. Of course, keeping up with technology developments is a challenge, and virtualization is no exception. My exposure to virtualization had pretty much been limited to trying it out on the desktop with a couple of flavors of Linux, even though NYCE has had it in place elsewhere since 2001. During a directory services consolidation project last year, I was able to use VMware ESX with local storage (basically a single server with several redundant drives in the chassis providing storage for the virtual machines (VMs), instead of a SAN) to consolidate 3 outdated physical servers onto a single piece of hardware. Hey, that's pretty nice! Now I have a bunch of rack space free, can get rid of old equipment, and have a lot fewer cables to pick through! Using VMware for the consolidation project proved the effectiveness of virtual machines on a small scale.

The Project

After catching my breath from the directory services consolidation project, it was time to turn my attention to our next VMware implementation: Development.

Working in and supporting a development environment has unique challenges not always seen in a production datacenter. First, the people you are working with are extremely tech-savvy by their very nature, so when something breaks, it is really broken. Furthermore, change happens quickly and needs to be accommodated to keep projects moving forward. Much of my job involves supporting developers and their computing needs, including maintaining a development lab, as well as some production equipment and the LAN. The lab was definitely the most difficult to keep a handle on. The equipment ranged from PC-class devices to workstations to servers with complete hardware redundancy. All the devices had varying amounts of memory, processing power and disk space, and individual UPSs to keep them protected from sporadic power fluctuations. Needless to say, hardware differences, the subsequent issue of driver compatibility, BIOS updates and support application compatibility led to a challenging environment for keeping things consistent, reliable and manageable. Also, each time we needed to test a particular application we were required to either procure new hardware (which of course never matched anything else) or reprovision existing equipment, which could be a real pain due to the BIOS updates required to support newer OSs.

Redesigning a lab from one that is based on individual server devices to one that uses VMware Infrastructure and a SAN requires a different mindset. Moving to this new architecture allows you to move beyond the constraints of individual servers. The lab now becomes an amalgamated pool of resources that can be dynamically used and reallocated as needed, with redundancy needs defined as a set of rules applied to the resource pool. High Availability and Distributed Resource Scheduler are an intrinsic part of your design from the beginning, rather than as a separate, pie-in-the-sky feature. Sharp learning curve on this one! Evidently, thinking along these lines has yet to take hold in many virtual server deployments (I recently viewed a webcast entitled Advanced Virtualization Management, where Richard Jones, VP and Service Director Data Center Strategies, Burton Group noted that less than 10% of virtual server implementations had deployed high-availability in their environments). The majority of implementations are limited to server consolidation - 1 physical server migrated to 1 virtual server, with many virtual servers hosted on a single machine. Obviously, that will clean up the floor space in your data center, but it also produces a single point of failure; if your host machine goes down, you lose ALL of your virtual servers.
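
To make the "rules applied to a resource pool" idea concrete, here is a minimal sketch of switching on HA and fully automated DRS for a cluster. It uses the modern pyVmomi Python bindings rather than anything that shipped with VI3, and the vCenter address, credentials and cluster name ("DevLab") are placeholders, not values from our environment.

    # Minimal sketch: enable HA and fully automated DRS on an existing
    # cluster. Uses pyVmomi (a later toolkit than the VI3-era tooling
    # described in this article); host, credentials and names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='secret', sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the cluster by name in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == 'DevLab')
    view.DestroyView()

    # HA (the "das" settings) and DRS are applied in one reconfigure task.
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=True),
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True, defaultVmBehavior='fullyAutomated'))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
    Disconnect(si)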

Deploying VMware

Once the 3 VMware host servers (preloaded with ESX 3.5), the SAN, and the Fibre Channel switch showed up, it was time to get started. Installing and configuring the equipment was straightforward, and I used a single APC Symmetra as a UPS. I did learn that fiber cables require far more delicate handling than CAT 5 cable! Once the equipment was up and running, it was time to start configuring the environment.

The Virtual Infrastructure Client (VIC) is used for managing and configuring the VM environment. Using the client on my desktop in my nice, quiet, sunny cubicle is a huge improvement over spending days working on equipment next to the droning noise of the lab. Trust me, even the best earplugs and earmuffs are only so good against the whine of so many fans. The VIC is very intuitive, allowing for the construction of clusters, templates, servers and virtual switches, and the configuration of HA and DRS parameters. The VIC has earned the right to have multiple icons on my desktop. I opted to configure the 3 VMware hosts into a single cluster, and pool all resources into a root pool rather than splitting them out by application or other criteria. The ease with which the infrastructure can be modified is mind-blowing - it gives you the freedom to experiment with many possible configurations until you find the one that works best for your environment. Once the new environment was in place, it was time to implement the actual migration from physical servers to virtual servers. To do this, I used a combination of Converter (P2V) and server templates to recreate our existing environment. Of course, I always made a full tape backup of each machine prior to migrating it to our virtual environment, which was needed on at least one occasion. There were a few hiccups along the way, especially when trying to use Converter on a Windows 2000 Server device, but I was able to work around the issue. Sometimes this required building a new VM from scratch and then copying the data and applications over; the upside is that it allowed me to clean up the server in the process.
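
As an illustration of how template-based deployment can be scripted, here is a sketch that clones a new VM from a template into a cluster's root resource pool using pyVmomi. The template, VM and cluster names are made up for the example; the original work was done through the VIC and Converter rather than scripts.

    # Minimal sketch: clone a VM from a template into a cluster's root
    # resource pool with pyVmomi. All names here are illustrative.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='secret', sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Return the first inventory object of the given type and name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        obj = next(o for o in view.view if o.name == name)
        view.DestroyView()
        return obj

    template = find_by_name(vim.VirtualMachine, 'w2k3-base-template')
    cluster = find_by_name(vim.ClusterComputeResource, 'DevLab')
    datacenter = content.rootFolder.childEntity[0]  # first datacenter

    # Place the clone in the cluster's root resource pool and power it on.
    spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
        powerOn=True, template=False)
    WaitForTask(template.CloneVM_Task(folder=datacenter.vmFolder,
                                      name='dev-app-01', spec=spec))
    Disconnect(si)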

Where We Came From and Where We Are Now

Here is what part of our development lab consisted of prior to implementing VMware. Of course, the cabling took a lot of management, and the workstations lacked hardware redundancy for the most part.

After installing VMware and converting the physical servers to virtual servers (P2V) in some cases, and building replacement VMs for physical servers in other cases, our lab now looks like this:

The lab is now much more redundant, has High Availability configured, and each server's resources are no longer bounded by the physical hardware immediately available to it. By pooling the resources of the 3 host servers, VMs can be dynamically updated with whatever resources they may need. Furthermore, I was able to free up 4 flat-panel LCDs for reprovisioning, since access to the VMs is all via remote clients. All told, I reduced the footprint of the lab equipment by 85%. The lab also stays much cooler now, without even turning up the air conditioning.
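
As a sketch of what "dynamically updated" can look like in practice, the snippet below bumps a VM's memory and vCPU allocation through a reconfigure task. Again this uses pyVmomi with placeholder names; note that on ESX 3.5 a change like this generally required the VM to be powered off first, since hot-add came later.

    # Minimal sketch: change a VM's RAM and vCPU allocation. Placeholder
    # names; on ESX 3.5 the VM typically had to be powered off first.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='secret', sslContext=ctx)
    content = si.RetrieveContent()

    # Find the VM by name (name is illustrative).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'dev-db-01')
    view.DestroyView()

    # Give the VM 4 GB of RAM and 2 vCPUs in a single reconfigure task.
    spec = vim.vm.ConfigSpec(memoryMB=4096, numCPUs=2)
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
    Disconnect(si)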

Testing High Availability

Now that the cluster was configured for High Availability, it was time to see if it actually worked as advertised.

Our environment at this point consisted of 12 VMs powered on and running on the cluster of 3 hosts (A, B, C). Using VMotion, 11 VMs were migrated onto hosts B and C, leaving a single Windows 2003 server running on host A. Then I did what would usually be considered a very bad idea - I pulled the power cords out of both redundant power supplies on host A. According to the plan, VMware HA would detect the failure of host A and automatically restart the affected VM on hosts with available resources. But would it work? In less than 2 minutes, the VM from host A was up and running on host B and ready to be logged into. Next, I duplicated the test using a Windows 2000 server running on host A. The same result was observed: the VM was automatically restarted on another host within 2 minutes. After proving the functionality of HA, I powered host A back up, reestablishing the cluster's redundancy, and VMotioned the VMs back to their original hosts. The confidence this gave me for using VMware in our environment was off the charts.
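
For reference, the manual VMotion step in this test can also be scripted; the sketch below live-migrates every powered-on VM off one host onto another. It is pyVmomi-based and the host names are placeholders; the actual test was driven from the VIC.

    # Minimal sketch: VMotion all running VMs off host A onto host B.
    # pyVmomi-based; host names are placeholders for the real ESX hosts.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='secret', sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    hosts = {h.name: h for h in view.view}
    view.DestroyView()
    host_a, host_b = hosts['esx-a.example.com'], hosts['esx-b.example.com']

    # Live-migrate every powered-on VM from host A to host B.
    for vm in host_a.vm:
        if vm.runtime.powerState == 'poweredOn':
            WaitForTask(vm.MigrateVM_Task(
                host=host_b,
                priority=vim.VirtualMachine.MovePriority.highPriority))
    Disconnect(si)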

Benefits

Now that the lab has been up and running with VMware for a few months, the benefits are starting to become apparent. Here are some of the more obvious ones:

  1. The lab is much easier to manage. Going from 13 disparate devices to 3 identical ones has made it much simpler to maintain. Getting rid of KVMs, LCDs, small UPSs and all the cables that go along with them has allowed me to shrink the space used in the lab by 85%.
  2. Flexibility is greatly enhanced. For example, we were able to determine the optimum amount of resources (both RAM and number of processors) for particular database queries by adjusting the resources allocated to a VM and repeating test jobs until we hit the point of diminishing returns (a sketch of automating this sweep appears after this list). Doing that with physical hardware would have been a nightmare, not only in hardware cost but also in time.
  3. Network throughput has increased. Previously, the network throughput of each device was limited by the number of NIC ports it contained - in most cases just one. Now that each ESX host contains multiple NICs, each with several ports, these are teamed, doubling the available bandwidth and providing network redundancy in case of NIC failure. Thanks to this configuration, I have also seen a reduction in the time backups take.
  4. The use of HA, DRS and VMotion has greatly improved our redundancy and uptime.
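
The tuning exercise from benefit 2 lends itself to a simple loop. The sketch below reconfigures a VM across several memory sizes and times a test job at each size; run_query_benchmark is a hypothetical stand-in for whatever test harness you use, and the VM name is made up.

    # Minimal sketch of the resource-tuning sweep from benefit 2.
    # run_query_benchmark() is a hypothetical stand-in for a real test
    # harness; replace it with your own job runner. Names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def run_query_benchmark(vm):
        """Hypothetical: run the database test job against this VM and
        return elapsed seconds. Not part of pyVmomi."""
        raise NotImplementedError

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='secret', sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'dev-db-01')
    view.DestroyView()

    # Sweep memory sizes until throughput stops improving.
    for mem_mb in (1024, 2048, 4096, 8192):
        WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(memoryMB=mem_mb)))
        print(mem_mb, 'MB ->', run_query_benchmark(vm), 's')
    Disconnect(si)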

Current Status

NYCE's development environment has now been migrated to VMware, and I couldn't be happier with the result. The development environment is far more flexible: we can deploy servers within minutes instead of weeks, and reallocate resources on the fly. The use of VMotion with HA and DRS has improved our uptime and our confidence in keeping the environment available 24/7. The business side is also happy, since we have reduced on-site maintenance and support costs, going from over a dozen machines to just 3. Overall, it has been a very positive and beneficial experience, and I look forward to using VMware in other environments within the Company.

About the Author

Mak King is Sr. Business Systems Engineer at NYCE Payments Network, LLC and looks forward to your comments. Mak can be reached via: akamak at gmail dot com

Disclaimer

The information, views, and opinions expressed in this article are solely those of the author and do not represent the research, views or opinions of NYCE Payments Network, LLC. NYCE Payments Network, LLC is not responsible and cannot be held accountable for any information, views, or opinions expressed in this article. The author takes full responsibility for the information, views, and opinions presented.
