
Architecting Twitter


The architecture underlying the very popular social application Twitter has been at the center of several discussions lately. Twitter has suffered repeated downtime and has turned off several popular features while the team deals with the issues. What can be learned from looking at how Twitter is trying to move forward?

Several people, including Om Malik and Dare Obasanjo, speculated about the underlying architecture of Twitter that led to the problems. More recently, Robert Scoble interviewed Twitter's Evan Williams and Biz Stone about things behind the scenes with the application and the company's future. The entire streaming video of the interview can be found on qik.

In the interview, Williams and Stone answered one of the big questions regarding Twitter's data architecture: Is Twitter using a Single Instance Storage (SIS) type of approach to user messages? At around the 13-minute mark in the interview, Williams talked about message storage and user timeline retrieval:
It doesn't do that [make a copy of the message for every user's follower], but that actually might be more efficient. Right now it goes into a database and when people want to get their timeline we construct the timeline out of the database and then, not every time, we then cache it in memory. But because things are written so often we are going to the database a lot just to update the cache. So there are lots of copies [of a message] in the cache but there's only one on disk. Our future architecture may be more like we're writing it many times because reading it will be a lot faster that way.
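Williams is describing the trade-off between keeping a single stored copy of each message and assembling timelines at read time ("fan-out on read") versus copying each message into every follower's timeline at write time ("fan-out on write"). The following sketch is purely illustrative and is not Twitter's code; the data structures and function names are assumptions made only to contrast the two strategies:

```python
# Illustrative sketch only -- not Twitter's implementation. It contrasts the
# two strategies Williams describes: a single stored copy assembled at read
# time versus per-follower copies materialized at write time.
from collections import defaultdict

messages = {}                    # message_id -> (author, text): the one copy "on disk"
followers = defaultdict(set)     # author -> set of follower ids
user_feed = defaultdict(list)    # follower id -> message ids copied in at write time

def post_fanout_on_read(message_id, author, text):
    # Cheap write: store the single copy and nothing else.
    messages[message_id] = (author, text)

def timeline_fanout_on_read(following):
    # Expensive read: scan and merge every followed author's messages.
    return [msg for msg in messages.values() if msg[0] in following]

def post_fanout_on_write(message_id, author, text):
    # Expensive write: store the message, then push its id to every follower.
    messages[message_id] = (author, text)
    for follower in followers[author]:
        user_feed[follower].append(message_id)

def timeline_fanout_on_write(user):
    # Cheap read: the timeline is already materialized for this user.
    return [messages[m_id] for m_id in user_feed[user]]
```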
The possibility of moving away from an SIS message architecture opens the door to techniques like data sharding that are already popular with many high-volume sites and applications. Randy Shoup talked about ways that eBay architected its systems for high scalability, in part, by using sharding:
The more challenging problem arises at the database tier, since data is stateful by definition. Here we split (or "shard") the data horizontally along its primary access path. User data, for example, is currently divided over 20 hosts, with each host containing 1/20 of the users. As our numbers of users grow, and as the data we store for each user grows, we add more hosts, and subdivide the users further. Again, we use the same approach for items, for purchases, for accounts, etc. Different use cases use different schemes for partitioning the data.
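To make Shoup's description concrete, here is a minimal, hypothetical sketch of horizontal sharding by user id; the host names and the modulo routing rule are assumptions for illustration, not eBay's actual scheme:

```python
# Hypothetical sketch of horizontal partitioning along the primary access
# path: user rows are spread over 20 hosts, and every query for a given
# user is routed to exactly one of them.
NUM_SHARDS = 20
SHARD_HOSTS = [f"userdb-{i:02d}.example.internal" for i in range(NUM_SHARDS)]

def shard_for_user(user_id: int) -> str:
    # Simple modulo routing; production systems often use ranges or a
    # directory service instead, so shards can be rebalanced as data grows.
    return SHARD_HOSTS[user_id % NUM_SHARDS]

# All reads and writes for user 123456 land on the same host:
print(shard_for_user(123456))
```

A plain modulo rule like this forces rows to move whenever hosts are added, which is one reason range- or directory-based schemes are often preferred when users must be subdivided further as the data grows.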
Bogdan Nicolau wrote an overview of the basics of database sharding. In the series, Bogdan discussed how to decide where and how to divide an application's data. The main point in making that decision was:
What I’m trying to say is that no matter the logic you chose to split a table, always keep in mind that you want 0 join, order by or limit clauses which would require more than one table shards.
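As a hypothetical illustration of that rule: if messages are sharded by author id, a per-author query is answered by a single shard, while a global ordering has to touch every shard and merge the results in the application. The connection helper and schema below are assumptions, not Bogdan's code:

```python
# Hypothetical illustration. With messages sharded by author_id, the first
# query runs against one shard; the second needs every shard plus an
# application-side merge -- the cross-shard ORDER BY / LIMIT to avoid.
def latest_by_author(conn_for_author, author_id, n=20):
    conn = conn_for_author(author_id)   # one shard holds all of this author's rows
    return conn.execute(
        "SELECT author_id, text, created_at FROM messages "
        "WHERE author_id = ? ORDER BY created_at DESC LIMIT ?",
        (author_id, n),
    ).fetchall()

def latest_overall(all_shard_conns, n=20):
    # Anti-pattern under this scheme: query every shard, then merge.
    rows = []
    for conn in all_shard_conns:
        rows.extend(conn.execute(
            "SELECT author_id, text, created_at FROM messages "
            "ORDER BY created_at DESC LIMIT ?", (n,)
        ).fetchall())
    return sorted(rows, key=lambda row: row[2], reverse=True)[:n]
```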
Bogdan moved on to the application side of using shards. Along with several code samples for an example problem, Bogdan explained why the approach should work:
As you can see, the weight now sits in the writing part, as the mapping table must be populated. When reading, the splitting of the data algorithm involved is no longer a concern.
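The pattern Bogdan describes resembles directory-based sharding: a mapping table is populated on every write so that reads become a simple lookup and never need to re-run the partitioning logic. The sketch below is a hypothetical, in-memory rendering of that idea, not Bogdan's sample code:

```python
# Hypothetical sketch of the mapping-table idea: the write path does the
# extra work of recording where each item lives, so the read path is a
# plain lookup with no splitting algorithm involved.
shard_map = {}                              # item_id -> shard name (the mapping table)
shards = {"shard_a": {}, "shard_b": {}}     # stand-ins for two physical databases

def write_item(item_id, data, choose_shard):
    shard = choose_shard(item_id)           # splitting logic runs here, at write time
    shards[shard][item_id] = data
    shard_map[item_id] = shard              # the extra write: populate the mapping table

def read_item(item_id):
    shard = shard_map[item_id]              # reads: look up the shard, fetch the row
    return shards[shard][item_id]

# Usage: place items by id parity, purely for illustration.
write_item(42, {"text": "hello"}, lambda i: "shard_a" if i % 2 == 0 else "shard_b")
print(read_item(42))
```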
With so many people involved in the discussion of how to scale Web 2.0 applications, perhaps Twitter will continue to move toward a more stable, scalable architecture.

InfoQ has many resources on performance and scalability. Take a look at them here.
