IBM's alphaWorks website has
released an Eclipse plugin that simplifies the development of applications using Hadoop, the open source Java MapReduce framework. Hadoop, which was originally created to support
Nutch, includes a distributed filesystem and an implementation of the MapReduce programming model used
extensively by Google for parallel processing of large data sets across a cluster.
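For readers unfamiliar with the programming model, the sketch below is the canonical word-count example written against Hadoop's classic org.apache.hadoop.mapred API: the map step emits a (word, 1) pair for every token in its input split, and the reduce step sums those counts per word. Class names and argument layout are illustrative, not taken from the plugin itself.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

// Classic word-count job: map emits (word, 1) pairs, reduce sums the counts per word.
public class WordCount {

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      // Tokenize each input line and emit a count of 1 for every word seen.
      StringTokenizer tokens = new StringTokenizer(value.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken());
        output.collect(word, ONE);
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      // Sum all counts emitted for this word across the cluster.
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(Map.class);
    conf.setReducerClass(Reduce.class);
    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
  }
}
```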
This year, integration work has made it possible to run Hadoop MapReduce applications on Amazon's EC2 platform and to use
Amazon's S3 service for storage. The Amazon Web Services
blog notes: "Because bandwidth between EC2 instances and data stored in S3 is not metered or billed, this is a very cost-effective way to process large amounts of data."
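Pointing a job at S3 instead of HDFS is largely a configuration matter. The snippet below is a minimal sketch assuming Hadoop's classic fs.s3.* filesystem properties; the bucket name and credential values are placeholders.

```java
import org.apache.hadoop.mapred.JobConf;

// Minimal sketch: direct a Hadoop job to read and write data in S3 rather than HDFS.
// Bucket name and credentials below are placeholders, not real values.
public class S3JobConfig {
  public static JobConf configureForS3(JobConf conf) {
    conf.set("fs.default.name", "s3://example-bucket");       // use the S3 filesystem as the default FS
    conf.set("fs.s3.awsAccessKeyId", "YOUR_ACCESS_KEY_ID");
    conf.set("fs.s3.awsSecretAccessKey", "YOUR_SECRET_KEY");
    return conf;
  }
}
```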
The IBM MapReduce plugin supports the following features:
- the ability to package and deploy a Java™ project as a JAR (Java Archive) file to a Hadoop server (local and remote)
- cheat sheets that assist with the development process
- a separate perspective with a view of Hadoop servers, the Hadoop distributed file system (DFS), and current job status
- wizards for facilitating the development of classes based on the MapReduce framework
The most recent release also includes improved cheat sheets and full OS X compatibility. The plugin uses SCP and SSH to interact with Hadoop servers, and HTTP to poll job status.