Twitter's Growing Pains

As every Twitter addict knows, the popular messaging system has suffered long and frustrating service outages, measured in hours or even tens of hours per month.

User complaints and defections to rival services have risen, and in recent months, some developers have even said that they've suspended work on their Twitter-related projects. But as these problems have crested, Twitter has been increasingly open about what executives say is a large-scale, ground-up architectural revamp aimed at improving the service's stability.

The company has avoided detailing timing or technical specifics. But cofounder Biz Stone said in an e-mail interview last week that the changes are under way, and that users should already be benefiting from the results.

"We are improving the system organically," Stone says. "We are already seeing correspondingly gradual improvements in stability as a result of our piece-by-piece improvements."

Twitter's service lets users post messages of up to 140 characters online, and lets other users receive those messages on cell phones or computers. According to the company's engineers, the service was originally built using "technologies and practices" suited to a content management system. Such systems generally use databases to organize content for publication, whether online or in print. The content management system used to produce this page, for example, has separate database entries for the story's title, the author's name, the date of publication, and the like. Although such systems let users create and revise content, they are not designed for real-time exchange of data.
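To make the contrast concrete, here is a minimal sketch in Python of the record-per-article table a content management system typically revolves around. The schema is illustrative only; it is not Twitter's (or this magazine's) actual one.

```python
import sqlite3

# A CMS-style layout: one row per published item, with separate
# fields for title, author, and publication date. Illustrative
# schema only -- not Twitter's or this magazine's actual one.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE articles (
        id           INTEGER PRIMARY KEY,
        title        TEXT,
        author       TEXT,
        published_at TEXT,
        body         TEXT
    )
""")
conn.execute(
    "INSERT INTO articles (title, author, published_at, body)"
    " VALUES (?, ?, ?, ?)",
    ("Twitter's Growing Pains", "Staff Writer", "2008-06-27", "..."),
)

# A publishing workload reads a few finished records at a time;
# nothing here is built for real-time message exchange.
for title, author in conn.execute("SELECT title, author FROM articles"):
    print(title, author)
```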

But Twitter evolved quickly into a genuine communications network, with its own fast-paced style of conversation and group messaging. The various hacks that Twitter's engineers used to turn a content management system into a messaging service haven't been enough to prevent recurring breakdowns.

The system's faults have been much dissected online. Twitter was originally written using the Ruby on Rails development framework, which provides considerable programming power and flexibility but is slow in its interactions with a back-end database under heavy load. Nor was the system's original MySQL database structure well suited to the complex, fast-paced stream of queries spawned by users "following" the updates of thousands of others.
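A rough sketch of why those "following" queries get expensive, assuming a simple two-table layout (tweets plus a follower relation) that is a stand-in for, not a copy of, Twitter's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tweets  (id INTEGER PRIMARY KEY, user_id INTEGER,
                          body TEXT, created_at TEXT);
    CREATE TABLE follows (follower_id INTEGER, followee_id INTEGER);
""")

def home_timeline(conn, user_id, limit=20):
    # The naive relational approach: assemble the timeline on every
    # read by joining the follower relation against all tweets. For
    # a user following thousands of accounts, each page load scans
    # thousands of authors' rows -- the query pattern the original
    # structure reportedly handled poorly.
    return conn.execute("""
        SELECT t.body, t.created_at
        FROM tweets t
        JOIN follows f ON f.followee_id = t.user_id
        WHERE f.follower_id = ?
        ORDER BY t.created_at DESC
        LIMIT ?
    """, (user_id, limit)).fetchall()

print(home_timeline(conn, user_id=1))  # empty here, but shows the call
```

One common remedy in messaging systems is to precompute each recipient's timeline when a message is posted ("fan-out on write") rather than assembling it on every read.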

The Twitter team's responses have been akin to triage at times: turning off features such as the ability to use the service through instant-messaging programs, or temporarily shutting down the "Reply" function, an extremely popular means of facilitating conversations between users.

The company has also periodically, and unpredictably, changed how frequently external applications may request Twitter data through its application programming interface (API), a particular thorn in the side of the developers who rely on that conduit for their own applications.
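For a developer, coping with shifting limits usually means polling defensively. The following Python loop is a hypothetical sketch: the URL and status codes are placeholders rather than Twitter's documented API, and the point is only that a client hard-coded to a fixed request rate breaks whenever the limit drops, so it should back off on rejection.

```python
import time
import urllib.error
import urllib.request

FEED_URL = "https://example.com/api/timeline.json"  # placeholder URL

def process(payload: bytes) -> None:
    print(f"received {len(payload)} bytes")

def poll(interval: float = 60.0, max_interval: float = 900.0) -> None:
    while True:
        try:
            with urllib.request.urlopen(FEED_URL) as resp:
                process(resp.read())
            interval = max(60.0, interval / 2)  # ease back toward the floor
        except urllib.error.HTTPError as err:
            if err.code in (429, 503):  # rejected: slow down and retry
                interval = min(max_interval, interval * 2)
            else:
                raise
        time.sleep(interval)
```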

But after a particularly painful May and June, in which downtimes were unusually frequent and protracted, the company may be turning a corner.
