Amazon EC2 Gains Favor with JEE and Groovy Developers
Whilst the idea of Software as a Service (SaaS) is increasingly mainstream, Hardware as a Service (HaaS) is still a new concept. An example of such a service is Amazon's Elastic Compute Cloud (EC2), which was announced in the summer of 2006. EC2 is a computing service built on the Xen hypervisor. It allows a developer to create Linux-based virtual machines, either from scratch or from pre-built image files. Then, using either a web services API or a script wrapper around that API, you can quickly deploy to any number of virtual machines.
The virtual machine structure allows Amazon to offer a variety of VM sizes up to the physical limits of the hardware. So in addition to the default small instance (1.7 GB of memory, one 1.0-1.2 GHz 2007 Opteron/Xeon processor, 160 GB of instance storage, 32-bit platform), Amazon now offers large (7.5 GB of memory, four processors of the same spec, 850 GB of instance storage, 64-bit platform) and extra large (15 GB of memory, eight processors of the same spec, 1690 GB of instance storage, 64-bit platform) instances. This mix-and-match approach is particularly useful if, for example, you need a heavyweight VM for database processing alongside two lightweight VMs providing an application server service. Moreover, VM images, which Amazon calls Amazon Machine Images (AMIs), can be archived and transported in a manner analogous to VMware's virtual appliances. This provides a mechanism for customers to post and publicly share AMIs, either to jump-start a particular product or as a revenue stream where companies and individuals offer paid image files. A number of AMI files are already available for download, including two for GigaSpaces and a Tomcat-based JEE environment.
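The choice between instance sizes can be thought of as a simple capacity-planning problem. The sketch below models the three instance types with the figures quoted above; the enum, the class names and the selection helper are illustrative inventions, not any AWS API.

```java
/** Illustrative model of the three EC2 instance types described above
 *  (figures from the article; this is not an AWS API). */
public class InstanceTypes {
    enum InstanceType {
        SMALL(1.7, 1, 160),
        LARGE(7.5, 4, 850),
        EXTRA_LARGE(15.0, 8, 1690);

        final double memoryGb;
        final int processors;
        final int storageGb;

        InstanceType(double memoryGb, int processors, int storageGb) {
            this.memoryGb = memoryGb;
            this.processors = processors;
            this.storageGb = storageGb;
        }
    }

    /** Pick the smallest instance type satisfying a memory requirement. */
    static InstanceType smallestFor(double requiredMemoryGb) {
        for (InstanceType t : InstanceType.values()) {
            if (t.memoryGb >= requiredMemoryGb) return t;
        }
        throw new IllegalArgumentException("no single instance is large enough");
    }

    public static void main(String[] args) {
        // e.g. one heavyweight database VM plus two lightweight app-server VMs
        System.out.println(smallestFor(6.0)); // LARGE
        System.out.println(smallestFor(1.0)); // SMALL
    }
}
```

In the database-plus-app-servers scenario from the text, a deployment script would request one `LARGE` (or `EXTRA_LARGE`) instance and two `SMALL` ones.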
The EC2 environment generally performs well, and some initial tests by RightScale suggest that the network throughput for both EC2 and Amazon's Simple Storage Service (S3) is very high. EC2 does have some significant scalability limitations, however. For one thing, you are restricted in vertical scaling terms by the largest VM size Amazon offers. Secondly, for horizontal scalability you have to rely on software load-balancing techniques. These are generally more limited than their hardware equivalents, since advanced techniques such as TCP buffering (where the load balancer buffers responses from the web server to send to slow clients, thereby freeing the web server to move on to other tasks) or SSL offload cannot be implemented. Furthermore, since the EC2 environment is VM based, you cannot use low-level performance techniques such as kernel modifications or other OS-level optimizations commonplace in heavy-duty Linux environments. A large VM-based environment creates other challenges too, since it can be difficult to set up and maintain.
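The software load balancing mentioned above typically reduces, at its simplest, to rotating requests across a list of backend instances. The following minimal round-robin sketch (class and method names are made up for illustration) shows the core of that technique; real software balancers such as HAProxy add health checks and connection management on top of it.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Minimal round-robin balancer of the kind EC2 deployments fall back on
 *  when no hardware load balancer is available. A sketch, not production code. */
public class RoundRobinBalancer {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> backends) {
        this.backends = backends;
    }

    /** Return the next backend address in rotation (thread-safe). */
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("10.0.0.1:8080", "10.0.0.2:8080"));
        System.out.println(lb.pick()); // 10.0.0.1:8080
        System.out.println(lb.pick()); // 10.0.0.2:8080
        System.out.println(lb.pick()); // 10.0.0.1:8080
    }
}
```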
It is also worth noting that the costs rise sharply when using the larger VM instances for a 24/7 service: an extra large VM would cost over $7,000/year. However, the same pricing model makes EC2 very good value both for low-capacity web servers and for certain other situations, such as batch processing where the service isn't required 24 hours a day, or testing where the service is only needed for short periods of time. Oracle's Coherence Data Grid team, for example, makes use of EC2 for testing purposes. Vice President of Development Cameron Purdy told InfoQ:
"Amazon’s EC2 is the only easy way to put a data center on your credit card.
For short duration, high-resource (i.e. many server) tasks, it can be quite useful and cost-effective. There is an investment in getting everything set up to use EC2 effectively, but if the task is going to be repeated many times, then it is a worthwhile investment. Since some of our engineers perform large-scale Data Grid testing as they develop product features, and since we can’t afford to run fifty dedicated servers for each engineer, EC2 is often the easiest and most cost-effective way to test our software.
EC2 allows you to start up a bunch of VM instances at once and we use that to start up a Data Grid. The easiest way to run a Data Grid on EC2 is to have each application use its own S3-bucket. Launching the EC2 image, the user supplies the application cluster name, which coincidentally is the name of the S3-bucket. The S3-bucket itself is then used to coordinate the bootstrapping process, with the nodes determining whether they are creating a new cluster (bootstrapping the Data Grid), or joining a cluster that is already running. To avoid the need for multicast discovery, the nodes determine a list of Well-Known Addresses (WKA), and the application – an executable .JAR file – is invoked with the WKA information passed in via the command-line. Since Coherence is easily embedded, the Coherence Data Grid library (a .JAR) is located inside the image itself.
One of the applications that we test on EC2 is part of a project code-named C0. Without going into details, the C0 Data Grid running on EC2 represents a single, giant pool of resources that can be dynamically allocated to different applications hosted on the Data Grid. We test this by building a VM image with the Coherence Provisioning Agent installed and started up as a service upon boot, with a watchdog to keep it running. The agent then joins the C0 cluster, and makes that particular VM available as a manageable resource to the Data Grid. Since EC2 is just a web service, and there are a number of Java libraries that can be used to invoke it, you can allocate servers from EC2 programmatically. Within a Capacity-On-Demand environment such as an auto-provisioning system, it’s possible to spin up VMs on demand and shut them down when demand falls. In our case, we drive the system using a rules engine that is itself a Data Grid application, and thus has the ability to spin up and shut down VMs as necessary – and specifically, it’s a Data Grid that automatically expands and contracts itself.
Finally, it’s necessary in a large-scale virtualized environment to even have each VM auto-configure itself, because it’s near impossible to manually configure and maintain a large number of VM images. For example, EC2 provides VM instance specific data, and we use that to configure things like the HTTP host and port for the Provisioning Agent Server, and also to access S3 as the bundle and code repository. In this manner, the only boot task is to spin up the VMs, because they self-configure rather than requiring parameters to be specified for each VM. One issue with this approach is that the Amazon AWS credentials for the account do need to be available to the VM so that it can access S3 to bootstrap itself; currently, this means that you may have to hard-wire the credentials into the EC2 boot image itself.
To summarize, EC2 opens up possibilities for systems that infrequently require a large number of servers, or that may need to dynamically increase or decrease the number of servers in an unpredictable manner. Today, we use it to test large-scale systems such as Data Grids, but tomorrow some of our customers may be using it as their production deployment platform. The industry often refers to EC2 as Software as a Service (SaaS), but it’s more than that – it’s a data center as a service. You too can put a data center on your credit card."
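The S3-bucket bootstrapping Purdy describes can be sketched as a simple decision: a starting node consults the shared well-known-address (WKA) list and either creates the cluster (if the list is empty) or joins it. In the sketch below an in-memory map stands in for the per-application S3 bucket, since the real calls need AWS credentials; all class and method names are hypothetical, not Coherence APIs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of the bootstrap decision described above: each node checks a
 *  shared store (S3 in the real setup; an in-memory map here) for
 *  well-known addresses, creating the cluster if none exist or joining
 *  otherwise. All names are hypothetical. */
public class GridBootstrap {
    // Stand-in for the per-application S3 bucket, keyed by cluster name.
    static final Map<String, List<String>> buckets = new ConcurrentHashMap<>();

    /** Registers this node's address and returns "bootstrap" or "join". */
    static String start(String clusterName, String myAddress) {
        List<String> wka = buckets.computeIfAbsent(clusterName, k -> new ArrayList<>());
        synchronized (wka) {
            boolean first = wka.isEmpty();
            wka.add(myAddress);
            // In the real system the WKA list is passed to the executable
            // JAR via the command line at this point.
            return first ? "bootstrap" : "join";
        }
    }

    public static void main(String[] args) {
        System.out.println(start("c0-test", "10.0.0.1")); // bootstrap
        System.out.println(start("c0-test", "10.0.0.2")); // join
    }
}
```

The real coordination through S3 also has to contend with S3's concurrency semantics, which this single-JVM sketch sidesteps entirely.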
Another area where EC2 can prove cost-effective is for start-up companies. InfoQ Java editor Ryan Slobojan recently worked on a project for a start-up called JamLoop, which aggregates and geolocates new and used musical instruments. In a blog post he lists a number of reasons why EC2 was an attractive choice:
- "JamLoop didn't need to purchase expensive hosting space or hire any IT people, or take a risk with a cheap hosting site - Amazon is a big enough name that we felt we could trust them
- JamLoop can adapt to changing traffic patterns - if they suddenly get more popular or see a traffic spike, they can instantiate new EC2 instances on demand and still be paying just $0.10/hour/server
- If JamLoop has a normal traffic load which 20 servers can handle, and a peak that 100 servers can handle, they don't need to always have 100 servers - they can scale up and down when needed
- Since this site operates as an aggregator, it needs to import external data - JamLoop can spawn up some instances which do harvesting, keep them around for as long as needed, and shut them down when the task is complete.
- The cost is really low - $2.40 per day of server time works out to about $72/month per server, which seems like a good price especially given that there are no contracts and it's a pay-as-you-go model
- JamLoop can run any operating system or software that they want since these are their boxes - they aren't constrained to what a provider will set up for them e.g. Apache and PHP"
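The figures in the list above and the $7,000/year figure earlier all follow from straightforward arithmetic on the hourly rates: $0.10/hour for a small instance is stated in the post, while $0.80/hour for an extra large instance is the launch-era rate and should be checked against current pricing. A quick sketch of the sums:

```java
/** Back-of-the-envelope EC2 cost arithmetic based on the hourly rates
 *  discussed in the article ($0.80/hour for extra large is an assumption). */
public class Ec2Costs {
    static final double SMALL_RATE = 0.10;  // $/hour, from the article
    static final double XLARGE_RATE = 0.80; // $/hour, launch-era rate (assumption)

    static double perDay(double rate)   { return rate * 24; }
    static double perMonth(double rate) { return perDay(rate) * 30; }
    static double perYear(double rate)  { return rate * 24 * 365; }

    public static void main(String[] args) {
        System.out.printf("small:  $%.2f/day, $%.2f/month%n",
                perDay(SMALL_RATE), perMonth(SMALL_RATE));           // $2.40/day, $72.00/month
        System.out.printf("xlarge: $%.2f/year%n", perYear(XLARGE_RATE)); // $7008.00/year

        // Cost of scaling between JamLoop's 20-server baseline and 100-server peak:
        System.out.printf("20 servers:  $%.2f/month%n", 20 * perMonth(SMALL_RATE));  // $1440.00
        System.out.printf("100 servers: $%.2f/month%n", 100 * perMonth(SMALL_RATE)); // $7200.00
    }
}
```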
Using the EC2 API is straightforward, but to make life even simpler Chris Richardson has posted a Groovy framework that can launch MySQL, Apache HTTP Server, a set of Tomcat instances and JMeter, as well as deploy web applications to Amazon's EC2. The framework is still at a very early stage of development and isn't yet open source (although this is the intention), but it provides a useful way for Java developers to get up and running with the technology very quickly.
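To give a flavour of the EC2 Query API that such frameworks wrap, the sketch below builds the request URL for a `RunInstances` call. Request signing (the `Signature` and related parameters) is deliberately omitted, so this URL would be rejected as-is; the class and method names are made up, and parameter names should be verified against the EC2 API reference.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

/** Illustrative construction of an EC2 Query-API RunInstances URL.
 *  Signing is omitted, so this is a sketch rather than a working request. */
public class Ec2Query {
    static String runInstancesUrl(String imageId, int count, String accessKeyId) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("Action", "RunInstances");
        params.put("ImageId", imageId);
        params.put("MinCount", String.valueOf(count));
        params.put("MaxCount", String.valueOf(count));
        params.put("AWSAccessKeyId", accessKeyId);
        // A real request would also carry Timestamp, SignatureVersion and
        // Signature parameters computed from the secret key.
        String query = params.entrySet().stream()
                .map(e -> e.getKey() + "="
                        + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
        return "https://ec2.amazonaws.com/?" + query;
    }

    public static void main(String[] args) {
        System.out.println(runInstancesUrl("ami-12345678", 2, "EXAMPLEKEY"));
    }
}
```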
Hate to sound like me too but
At some point, the interesting thing will be pre-built images that offer administration as simple as if they were on a local machine or within one's own data center.
In other words, being able to transparently use EC2 and other "cloud" providers as just one more place to "mount" applications, all via a simple HTML administrative UI.
The Missing Piece in Cloud Computing: Middleware Virtualization
There is an interesting article in Forbes magazine entitled The Death Of Hardware. Quoting from that article:
The next revolution in high tech is taking place inside the "cloud" of the Internet. Small outfits looking to do lots of computing in a hurry are not buying hardware anymore; they're renting from established players that already operate vast networks of cheap computers. Time-sharing, a concept from the dawn of the computing age, is back with a vengeance.
Applications looking for ways to deploy in a scale-out model can now do so with little operational cost.
Ideally, running our application on the cloud should be as simple as running it on a single machine. To achieve this goal we are still missing the middleware virtualization piece, as I noted in one of my recent posts:
The Missing Piece in Cloud Computing: Middleware Virtualization
This is the area where I expect to see the next wave of innovation in the middleware industry.