Hypertable Lead Discusses Hadoop and Distributed Databases

Databases have been gathering a significant amount of buzz lately. IBM recently invested in EnterpriseDB, which supports a Cloud edition running on Amazon EC2. Amazon released its own cloud database late last year. Google's Bigtable has also been studied by the community even though it is not open source. Continuing along these lines, two open source projects, HBase and Hypertable, have leveraged the open source Map/Reduce platform Hadoop to provide Bigtable-inspired scalable database implementations. InfoQ sat down with Doug Judd, Principal Search Architect at Zvents, Inc. and Hypertable project founder, to discuss the project.

1. How would you describe Hypertable to someone first hearing about it?

Hypertable is an open source, high performance, scalable database modeled after Google's Bigtable. Over the past several years, Google has built three key pieces of scalable computing infrastructure designed to run on clusters of commodity PCs. The first piece of infrastructure is the Google File System (GFS), which is a highly available filesystem that provides a global namespace. It achieves high availability by replicating file data across machines (and across racks), which makes it impervious to a whole class of hardware failures that traditional file and storage systems are not, including failures of power supplies, memory, and network ports. The second piece of infrastructure is a computation framework called Map-Reduce that works closely with the GFS to allow you to efficiently process the massive amounts of data that you have collected. The third piece of infrastructure is something called Bigtable, which is analogous to a traditional database. It allows you to organize massive amounts of data by some primary key and efficiently query the data. Hypertable is an open source implementation of Bigtable, with improvements where we see fit.
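For readers new to the Bigtable data model Judd describes, here is a minimal C++ sketch of the idea, with a sorted in-memory map standing in for a tablet held by a single server. The names and structure are illustrative only, not Hypertable's actual API:

```cpp
// Minimal sketch of a Bigtable-style data model: each cell is addressed by
// (row key, column, timestamp), and rows are kept sorted by key so that
// point lookups and range scans by primary key stay efficient.
// All names here are illustrative, not Hypertable's real interfaces.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <tuple>

struct CellKey {
  std::string row;        // primary key; the table is sorted on this
  std::string column;     // "family:qualifier"
  uint64_t    timestamp;  // multiple versions per cell; simplified here

  bool operator<(const CellKey &other) const {
    return std::tie(row, column, timestamp) <
           std::tie(other.row, other.column, other.timestamp);
  }
};

int main() {
  // A sorted map stands in for one tablet of the table.
  std::map<CellKey, std::string> table;
  table[{"com.example.www/index.html", "anchor:refer.com", 1}] = "Example";
  table[{"com.example.www/index.html", "contents:", 2}] = "<html>...</html>";
  table[{"com.other.www/", "contents:", 1}] = "<html>other</html>";

  // Scan all cells for one row key: lower_bound finds the first matching cell.
  const std::string row = "com.example.www/index.html";
  for (auto it = table.lower_bound({row, "", 0});
       it != table.end() && it->first.row == row; ++it)
    std::cout << it->first.column << " @ " << it->first.timestamp
              << " = " << it->second << "\n";
  return 0;
}
```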

If you run a website that sees high traffic volume, then you should care about scalable computing infrastructure. Your web server logs contain valuable information concerning user behavior on your site. You can run analytic calculations over this log data and use the results to provide a better service. It allows you to answer questions such as, "If a customer buys product X, what else are they likely to buy?" or "If a user lands on page Y, what is the average number of subsequent clicks they do on the site before terminating the session?"

2. Why did the team start the project?

The Engineering team at Zvents started this project because we recognized the value of data and data-driven engineering. We realized that, at scale, the traditional tools for storing and processing information fall down. At the time we started the project, an open source Bigtable implementation did not exist, so we decided to build it ourselves. We chose open source because we felt that an open source implementation of Bigtable was inevitable. By leading the development effort, we would have a leg up on the competition in terms of knowledge, expertise, and credibility.

3. Does Hypertable require Hadoop to run?

No, Hypertable does not strictly require Hadoop to run. Hypertable is designed to run on top of an existing distributed file system, like HDFS, the one provided by Hadoop. The interface to the underlying file system has been abstracted via a broker mechanism. Hypertable communicates with the underlying file system by speaking a standard protocol to a DFS broker process. This allows Hypertable to run on top of any file system, as long as a broker has been implemented for it. The primary DFS broker that we've been using is the one for HDFS, but we also have a broker for KFS, the Kosmos File System, and one called the "local broker," which just reads and writes data to and from a locally mounted filesystem. The "local broker" is the one we use for testing, but it can also be used to run Hypertable on top of any distributed file system that is mountable via FUSE. As far as the rest of Hadoop goes, we intend to write an InputFormat class that will allow tables in Hypertable to be used as inputs to map-reduce jobs.
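The broker mechanism Judd describes is essentially an abstraction layer between the database and whichever file system sits underneath it. The following C++ sketch illustrates the idea; the interface and class names are assumptions for illustration, not Hypertable's real classes:

```cpp
// Rough sketch of the DFS-broker idea: the database talks to a single
// abstract interface, and each concrete broker adapts it to one backing
// store (HDFS, KFS, or a locally mounted filesystem). Names are illustrative.
#include <cstddef>
#include <string>
#include <vector>

// What the database expects from any underlying distributed file system.
class DfsBroker {
public:
  virtual ~DfsBroker() = default;
  virtual int  open(const std::string &path) = 0;
  virtual void write(int fd, const std::vector<char> &buf) = 0;
  virtual void read(int fd, std::vector<char> &buf, size_t len) = 0;
  virtual void close(int fd) = 0;
};

// One broker per backing store; this stub just models a local filesystem.
class LocalBroker : public DfsBroker {
public:
  int  open(const std::string &) override { return 3; /* would call ::open(2) */ }
  void write(int, const std::vector<char> &) override { /* would call ::write(2) */ }
  void read(int, std::vector<char> &, size_t) override { /* would call ::read(2) */ }
  void close(int) override { /* would call ::close(2) */ }
};

// The rest of the system only sees the abstract interface, so supporting a
// new file system means writing a new broker, nothing more.
void append_log_entry(DfsBroker &dfs, const std::vector<char> &entry) {
  int fd = dfs.open("/hypertable/servers/rs1/log/commit");
  dfs.write(fd, entry);
  dfs.close(fd);
}

int main() {
  LocalBroker broker;
  append_log_entry(broker, {'o', 'k'});
  return 0;
}
```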

4. How does it compare to HBase and why not contribute to that project instead?

Hypertable differs from HBase in that it is a higher performance implementation of Bigtable (editor's note: an interview with the HBase team will run on InfoQ in the coming weeks). I initially started working with Jim Kellerman and some members of the Hadoop team on the HBase effort. We had some differences of opinion on how it should be developed. In particular, we disagreed on the choice of implementation language. They insisted on Java, while I pushed for C++. That's when I forked and started the Hypertable project. The following document, entitled "Why We Chose C++ Over Java," gives a technical explanation of our decision to use C++:

http://code.google.com/p/hypertable/wiki/WhyWeChoseCppOverJava

Although we had a split early on, we're still friends with the HBase team. In fact, we all went out to lunch last week and had an enjoyable time trading implementation war stories and exchanging ideas.

5. From my novice exploration of Hadoop, the M/R paradigm seems to apply well to batch processing of data. How does Hadoop apply in a more transactional, single-request-based paradigm?

Map-reduce is definitely an offline batch processing system. It's great for sorting and sifting through massive amounts of information (e.g. log data), offline. Often, these large scale, offline computations will generate massive tables of information to be consulted by a live service to provide a better user experience. This information might include per-user profile data, or per-query statistical information. This is where Hypertable enters the picture. It provides a scalable solution to store data indexed by a primary key that can be queried online.
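A rough C++ sketch of that batch/serving split, using hypothetical names: an offline pass reduces raw log events into a table keyed by user id, and the live service answers each request with a single primary-key lookup:

```cpp
// Sketch of the pattern described above (illustrative only, not Hypertable's
// API): offline batch processing builds a keyed table; the online path does
// cheap point lookups against it at request time.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Offline: reduce raw click events (user id, page) into one row per user.
std::map<std::string, int> build_click_counts(
    const std::vector<std::pair<std::string, std::string>> &click_log) {
  std::map<std::string, int> counts;  // key = user id (the "primary key")
  for (const auto &event : click_log)
    ++counts[event.first];
  return counts;
}

// Online: a single request answered with one primary-key lookup.
int clicks_for_user(const std::map<std::string, int> &profiles,
                    const std::string &user_id) {
  auto it = profiles.find(user_id);
  return it == profiles.end() ? 0 : it->second;
}

int main() {
  auto profiles = build_click_counts(
      {{"u1", "/home"}, {"u1", "/buy"}, {"u2", "/home"}});
  std::cout << "u1 clicks: " << clicks_for_user(profiles, "u1") << "\n";
  return 0;
}
```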

6. What has been the best thing you've found working with Hadoop?

It basically works. We've used it successfully at Zvents, and many others have used it in an offline production setting. In fact, Yahoo recently migrated their WebMap processing onto Hadoop. They've described their recent accomplishment in the following blog post:

http://developer.yahoo.com/blogs/hadoop/2008/02/yahoo-worlds-largest-production-hadoop.html

7. The worst?

I guess the worst part about working with Hadoop is that the project has been going on for years without a stable 1.0 release. At times, it's been a challenge to pin down a stable release. The project has gotten a lot more stable lately. The other problem that we've had with Hadoop is the lack of support for a 'sync' operation in their distributed filesystem. This makes it unsuitable for storing a commit log. They are actively working on this and should have it available by the time we release Hypertable 1.0.
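To illustrate why the missing 'sync' matters for a commit log, here is a minimal POSIX sketch (not Hypertable code, and against a local file rather than HDFS): an update can only be acknowledged once its log entry has been flushed to stable storage, which is exactly the step HDFS could not yet provide:

```cpp
// Commit-log durability in miniature: append the serialized update, then
// force it to disk with fsync before acknowledging the client. Without a
// sync primitive in the file system, recovery cannot rely on the log.
#include <fcntl.h>
#include <unistd.h>
#include <string>

bool append_commit_entry(int fd, const std::string &entry) {
  // Step 1: append the serialized update to the log file.
  if (write(fd, entry.data(), entry.size()) != (ssize_t)entry.size())
    return false;
  // Step 2: flush to stable storage; only now is it safe to ack the client,
  // because a crash can be recovered by replaying the log.
  return fsync(fd) == 0;
}

int main() {
  int fd = open("commit.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
  if (fd < 0) return 1;
  bool ok = append_commit_entry(fd, "PUT row1 contents: <html>...</html>\n");
  close(fd);
  return ok ? 0 : 1;
}
```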

8. When is 1.0 being targeted for?

We are shooting for a "beta" release sometime over the next month or two. The 1.0 release should follow soon thereafter. If you would like to be notified about the "beta" and 1.0 releases, please join the Hypertable announcement mailing list: http://groups.google.com/group/hypertable-announce

9. What companies are using Hypertable?

Given its "alpha" status, there are not yet any companies that have built their service on top of Hypertable. However, there has been a fair amount of interest, and there have been over 3,000 downloads of the software. We've also gotten great feedback from a wide variety of people on the mailing lists. One person from a company outside of Omaha, Nebraska reported a regression failure. After a bit of back and forth on the mailing list, he put his machine outside the firewall and gave me VNC access to debug the problem. I was able to isolate the issue, which turned out to be a timestamp conversion problem caused by running the software outside of the PST time zone. Another person, a Ph.D. candidate at Stony Brook University in New York, recreated Google's Bigtable benchmark test using Hypertable with promising results. This kind of community feedback has been a big help in solidifying the software. Last week we gave a Hypertable presentation to a group of thirty or so engineers at Facebook. They indicated that they have a need for this kind of infrastructure in their backend tool chain and would be willing to give it a try in a couple of months, when the software is closer to being production ready. We expect to see real applications built on Hypertable around the time of the 1.0 release.
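As an aside, the timestamp issue Judd mentions is a common class of bug. The sketch below (not Hypertable's actual code) shows how converting a wall-clock timestamp with mktime() depends on the local time zone of the machine running the software, which is why a test that assumes PST can fail elsewhere:

```cpp
// Illustration of a time-zone-dependent conversion: mktime() interprets a
// broken-down wall-clock time in the *local* zone, so the resulting epoch
// value changes with the TZ setting of the host. Treating timestamps as UTC
// (e.g. via the common timegm() extension) avoids this.
#include <ctime>
#include <iostream>

int main() {
  std::tm t{};
  t.tm_year = 2008 - 1900;  // a wall-clock timestamp parsed from a log line
  t.tm_mon  = 3;            // April
  t.tm_mday = 1;
  t.tm_hour = 12;
  t.tm_isdst = -1;          // let the library decide on daylight saving

  // This value differs between a PST machine and, say, a CST machine,
  // which is how a regression can pass in one zone and fail in another.
  std::time_t local_epoch = std::mktime(&t);
  std::cout << "epoch as interpreted in the local zone: " << local_epoch << "\n";
  return 0;
}
```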

10. What does the future hold for Hypertable?

Near term, we're tightly focused on high availability, specifically server recovery. As soon as that is in place and well tested, the project will move into "beta" status, with a 1.0 release sometime soon after that. We expect Hypertable to replace MySQL for many web service applications. We're working to replace the acronym LAMP (see http://en.wikipedia.org/wiki/LAMP_(software_bundle)) with LAHP, the 'H' referring to Hypertable, of course. This means rock solid reliability and continued performance improvements. That should keep the team busy for the foreseeable future.

Doug will be giving a Hypertable presentation at the SDForum, Software Architecture & Modeling SIG on April 23rd in Palo Alto.
