Rawashdeh notes that Twitter has seen its usage pattern change over the last year, from brief spikes (for example, the clock striking midnight on New Year’s Eve or a celebrity pregnancy announcement) to more sustained peaks of traffic lasting several hours. This occurred, for example, during the Olympics closing ceremony, the NBA finals, and now the election.
Part of the reason Twitter was able to sustain this level of traffic was a set of changes the company has been making to its infrastructure, including, as InfoQ previously reported, a gradual shift away from Ruby to a set of services written in a mixture of Java and Scala and running on the JVM.
The most recent reported change was for Twitter's mobile clients. Rawashdeh wrote:
As part of our ongoing migration away from Ruby we've reconfigured the service so traffic from our mobile clients hits the Java Virtual Machine (JVM) stack, avoiding the Ruby stack altogether.
Twitter was at one time thought to be the largest Ruby on Rails shop in the world, and has made a substantial investment in its Ruby stack, going as far as developing its own generational garbage collector for Ruby, called Kiji. Unlike the standard Ruby collector, Kiji divides objects into generations and, on most cycles, places only the objects of a subset of generations into the initial white (condemned) set.
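The idea behind a generational collector like Kiji can be sketched in a few lines. The toy model below (written in Java, the language Twitter was moving to; all class and method names here are hypothetical, not Kiji's actual code) shows a minor cycle that condemns only young-generation objects: reachable young objects survive and are promoted, unreachable ones are reclaimed, and the old generation is left untouched.

```java
import java.util.*;

// Toy model of a generational collector (illustrative sketch only;
// the structure and names are hypothetical, not Kiji's real design).
class GenerationalHeap {
    static final int YOUNG = 0, OLD = 1;

    static class Obj {
        int generation = YOUNG;          // new allocations start young
        List<Obj> refs = new ArrayList<>();
        boolean marked = false;
    }

    final List<Obj> heap = new ArrayList<>();
    final List<Obj> roots = new ArrayList<>();

    Obj allocate() {
        Obj o = new Obj();
        heap.add(o);
        return o;
    }

    // A minor cycle: only YOUNG objects form the initial white
    // (condemned) set. Returns the number of objects reclaimed.
    int minorCollect() {
        for (Obj o : heap) o.marked = false;
        // Mark everything reachable from the roots.
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (o.marked) continue;
            o.marked = true;
            stack.addAll(o.refs);
        }
        // Sweep: old-generation objects are skipped entirely.
        int reclaimed = 0;
        Iterator<Obj> it = heap.iterator();
        while (it.hasNext()) {
            Obj o = it.next();
            if (o.generation != YOUNG) continue;
            if (o.marked) {
                o.generation = OLD;  // survivor: promote to old gen
            } else {
                it.remove();         // unreachable young object: reclaim
                reclaimed++;
            }
        }
        return reclaimed;
    }
}
```

Because most objects die young, a cycle that only examines the young generation reclaims most garbage at a fraction of the cost of scanning the whole heap.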
In 2010, however, the firm announced that it was shifting some of its development focus. For the front end, the firm followed the HTML5 trend of shifting rendering code into browser-based JavaScript and, in so doing, ceased to gain much benefit from Rails' templating model for building web pages. Then, citing both performance and code encapsulation as drivers, the engineering team rewrote both its back-end message queue and its tweet storage engine in Scala.
Also in 2010, the search team at Twitter started to rewrite the search engine, moving search storage from MySQL to a system built on top of Lucene. Then in 2011 the engineering team announced that it had replaced the Ruby on Rails front end for search with a Java server called Blender, which resulted in a 3x drop in search latencies.
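The core data structure behind a Lucene-style engine is the inverted index: a map from each term to the set of documents containing it, so a query is a direct lookup rather than a scan over every row, as a SQL LIKE query would be. The sketch below is an illustrative minimal version in Java, not Twitter's or Lucene's actual code.

```java
import java.util.*;

// Minimal inverted index: term -> sorted set of document IDs.
// An illustrative sketch of the concept, not Lucene's implementation.
class InvertedIndex {
    private final Map<String, Set<Integer>> postings = new HashMap<>();

    // Tokenize on whitespace and record each term's document.
    void add(int docId, String text) {
        for (String term : text.toLowerCase().split("\\s+")) {
            postings.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
        }
    }

    // One hash lookup per query term, instead of scanning every document.
    Set<Integer> search(String term) {
        return postings.getOrDefault(term.toLowerCase(), Collections.emptySet());
    }
}
```

A real engine adds tokenization rules, ranking, and segment merging on top, but the lookup-instead-of-scan structure is what makes full-text search fast at Twitter's scale.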
As a result of these changes, Twitter's systems kept running without problems. "The bottom line: No matter when, where or how people use Twitter, we need to remain accessible 24/7, around the world," wrote Rawashdeh. "We’re hard at work delivering on that vision."