DTS - Handling the code

Posted on 5/05/2010 by Prem Gurbani, Frontend Architect

Object oriented design allows us to organize our code in ways that make it more manageable, scalable, reusable and easier to understand. Components can be built so that they can be used by anybody without knowledge of their inner workings. In practice a component may be a single object or a collection of objects, with all of its data and code encapsulated inside it. All that is needed is an interface through which others can request the service the component is expected to fulfill. This yields highly reusable components that can be substituted by another version (such as an upgrade or a different supplier's implementation). Effectively, you can glue components together to build up your functionality.

When designing modules it is important to address visibility and security when accessing object attributes and methods. This ensures integrity and a clear assignment of responsibilities. This introductory seminar describes the scoping concepts of languages such as PHP and JavaScript, as well as how to organize your code into reusable modules.
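
As a minimal sketch of the visibility rules the seminar covers (the class is invented for illustration, not taken from the slides), in PHP an object's attributes can be hidden behind a small public interface:

```php
<?php
// Illustrative only: a component hiding its state behind a public interface.
class Counter
{
    private $count = 0;            // private: invisible outside the class

    public function increment()    // public: the component's interface
    {
        $this->count++;
    }

    public function getCount()
    {
        return $this->count;
    }
}

$c = new Counter();
$c->increment();
$c->increment();
echo $c->getCount();  // 2
// $c->count = 5;     // fatal error: cannot access private property
```

Callers can only go through `increment()` and `getCount()`, so the internal representation can change without breaking any code that uses the component.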

[slideshare id=3869595&doc=dtss03e02handlingthecode-100427062503-phpapp01&w=600]

DTS - Typing system

Posted on 5/03/2010 by César Ortiz, Architecture Engineer

The presentation starts with some basic theory on types. It then describes different classifications of type systems, with the static/dynamic and strong/weak dimensions receiving the most attention. The third topic is how the mix of polymorphism with covariance and descendant hiding affects the type system, and what new problems arise that the type system has to address. The presentation finishes with a discussion of the pros and cons of the PHP type system.
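
To make the strong/weak distinction concrete, a couple of lines of PHP (my own examples, not taken from the slides) show the implicit coercion that makes PHP a dynamically and weakly typed language:

```php
<?php
// PHP variables carry no declared type, and operators coerce their operands.
$x = "5";            // a string...
$y = $x + 1;         // ...coerced to an integer by the "+" operator
echo gettype($y);    // integer
echo $y;             // 6

// Loose comparison coerces before comparing; strict comparison does not.
var_dump(1 == "1");  // bool(true)
var_dump(1 === "1"); // bool(false)
```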

[slideshare id=3869244&doc=dtss03e04typing-100427080016-phpapp02&w=600]

Scalability Talk at International Week of Technological Innovation

Posted on 4/19/2010 by Erik Schultink, Chief Technical Officer

On Wednesday (21.04.2010), I'm giving a talk about scalability at Tuenti at "International Week of Technological Innovation", hosted by Universidad Europea de Madrid. In prepping that talk over the weekend, I put together some very interesting data about the work our team has done over the last 6 months here at Tuenti. This data shows the hard-won gains from months of the approach of partition, archive, optimize - then profile/monitor and repeat. I'm pretty proud of that work and want to highlight some of it.

As I'll explain in my talk, I define scaling as maintaining acceptable performance under increasing amounts of load. I think of the performance of the system as a graph like the one shown below:

The x-axis is request rate (e.g. requests/sec); the y-axis is response time (ms). We care about the total throughput of the system - the total number of requests that can be served in a unit of time - while ensuring that every response is generated faster than some upper bound on response time (dashed red line). You can think of the point beyond which response time exceeds this threshold (i.e. performance is unacceptable) as the "capacity" of the system - the intersection of the red and blue lines. Scaling means moving this point farther and farther to the right, through actions such as optimizing, re-architecting, and adding infrastructure. All of these actions shift and re-shape the performance curve of the system - hopefully for the better.
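
In code, finding that capacity point from sampled data is simple: walk the performance curve until the response-time bound is crossed. A sketch with made-up numbers (not our production figures):

```php
<?php
// Toy example: $samples maps request rate (req/s) to average response
// time (ms), ordered by increasing rate; $maxMs is the dashed red line.
function capacity(array $samples, $maxMs)
{
    $capacity = 0;
    foreach ($samples as $rate => $responseMs) {
        if ($responseMs > $maxMs) {
            break;             // beyond this point performance is unacceptable
        }
        $capacity = $rate;     // highest rate still within the bound
    }
    return $capacity;
}

$curve = array(100 => 80, 200 => 90, 400 => 120, 800 => 300);
echo capacity($curve, 150);    // 400
```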

What does that look like in practice? At Tuenti, we profile a portion of requests to our system and from that data, I produced the following performance curves:

Each curve in that graph is from a sample dataset, and the datasets were taken about two months apart. As in the theoretical graph, the x-axis is request rate and the y-axis is the average response time for those requests. For full disclosure, I scaled the data and excluded some outliers to get curves that overlay nicely on each other - but these curves remain quite representative of the performance profiles of our system and, despite some extrapolation, don't mask any bottlenecks lurking within the range of the x-axis.

These curves tell a very interesting story. In October, our system was clearly much inferior to what it is today. Although that dataset is quite noisy, it is clear that performance degraded rapidly at a much lower range of request rates than in later months. I don't recall a particular bottleneck we were facing at that time - but likely it's explained by bumping into CPU and DB contention.

Two months later, in December, we had flattened this curve substantially. Although one could complain that some outliers at the left extreme are forcing a very generously fitted trendline, it's pretty clear that we had better performance in December at high request rates than in October. Note that, interestingly enough, response time at lower load levels is actually worse in December than in October - we had traded about 10 ms of best-case performance for increased scalability, but that's a trade I'll take any day. Overall throughput of the system is more important than the response time of any single request.

After another two months of work, in February, we had reclaimed that 10 ms while further flattening the curve. The dataset also looks much more stable, with less noise. In April, hard work brought response times down another 10 ms while maintaining a very healthy looking curve and stable dataset.

Overall, I think this graph gives a fantastic representation of 6-months of work scaling a Web 2.0 system - maintaining and improving performance, in the face of significant growth and new feature launches. Those response time figures are total - including CPU time rendering the page, as well as cache access and DB queries. Such work involves a lot of different teams: our backend scalability team of course, but also our backend framework and systems teams. And whatever optimizations those teams make, we still count on our product development teams to write new features in ways that don't abuse our frameworks, DBs, or CPUs.

Interested in pushing this curve farther? Check out jobs.tuenti.com.

Evolving a Backend framework

Posted on 4/16/2010 by César Ortiz, Architecture Engineer

The duties of a Backend Software Architect at Tuenti include the maintenance and evolution of the Backend Framework. In this article we will talk about the evolution of Tuenti's framework, share its pros and cons, and briefly introduce its features without going into many technical or architectural details (they will be covered in future articles).

Historical Review

The software that runs www.tuenti.com changes continuously, with at least two code deployments per week. The scope of these releases varies, but usually we release a lot of small changes that touch many different parts of the system. Of course, sometimes our projects are really big, and their releases get divided and shipped in a series to reduce overall complexity and minimize risk.

An identical approach is applied to framework releases. Currently the modifications are mostly subtle, but the introduction of the framework had to be divided into a few phases, some of which were further decomposed into smaller ones.

The original version

Since its creation, the site has run on lighttpd, MySQL and PHP. From the first version, no third-party frameworks have been used and all the software has been developed in-house (for good or for bad).

The first version of the "lib" was quite primitive from an architectural point of view since, as a start-up, Tuenti's primary aim was to reach the public fast and then evolve once the product proved successful.

The transitional version

The transitional version was the one in place before we introduced the current framework. This code used a framework built around the MVC pattern, with a set of libraries supporting model definition and communication with the storage devices (memcached and MySQL). At this point in time, data partitioning was being introduced for both memcached and MySQL, allowing Tuenti to scale much more effectively.

The use of memcached is very important for the performance of the site. When a feature was being implemented, the developer not only had to consider how the data was going to be partitioned in the database, but also had to decide what data would be cached in memcache and how the cache would work, and make sure that all interdependencies for data consistency were satisfied. The caching layer contained not only simple data structures, but also indexes, paging structures, etc.
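
The pattern each developer had to hand-build is essentially cache-aside. A sketch (an array stands in for memcache and a closure for a partitioned MySQL lookup; none of this is Tuenti's actual API):

```php
<?php
// Cache-aside: check the cache first, fall back to the DB, then populate
// the cache. $cache stands in for memcache; $loadFromDb for a MySQL query.
function getCached($key, array &$cache, $loadFromDb)
{
    if (array_key_exists($key, $cache)) {
        return $cache[$key];                     // cache hit: no DB round-trip
    }
    $value = call_user_func($loadFromDb, $key);  // cache miss: query the DB
    $cache[$key] = $value;                       // cache it for next time
    return $value;
}

$cache  = array();
$loader = function ($key) { return "row-for-" . $key; };
echo getCached("user:42", $cache, $loader);  // row-for-user:42 (from the DB)
echo getCached("user:42", $cache, $loader);  // row-for-user:42 (from the cache)
// On every write, the entry must be invalidated or updated to stay consistent.
```

The hard part mentioned above is exactly the last comment: keeping every cached index and paging structure consistent with every write path, by hand.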

The current version

Currently, newly developed domain modules use the new backend framework (which replaced the old model and its supporting classes), and we are gradually migrating modules from the transitional framework to the new one.

We have also designed and developed a new front-end architecture which is still under evaluation and testing. In the following months we will be posting more information about the framework and implemented solutions, so please be patient.

Some of the most important advantages of the new framework are:

  • standardization of data containers,
  • transactional access to the storage (even for devices not supporting transactions),
  • complete abstraction of the data storage layer.

In addition to the above, the framework introduces several concepts, among which you'll find:

  • domain-driven development,
  • automatic handling and synchronization of 3 caching layers,
  • support for data migration, partitioning, replication,
  • automatic CRUD support for all domain entities,
  • object-oriented access to data, as well as direct access from containers (avoiding expensive instantiation of objects).

The framework is entirely coded in PHP and (so far) we have not moved any parts of the code into PHP extensions. This leaves us a lot of room for possible performance improvements, but would reduce the flexibility of the code if we decide to take that step.

Selected framework features

A framework designed for a website like Tuenti has to address a lot of technical issues which you would not encounter in a standard website deployment. The problems arise in different areas: the number of developers working on the project, scalability problems, the migration phases, and many more that appear as the site evolves over time.

Although a deep explanation is out of the scope of this article, let's briefly review the features mentioned above.

Transactional access to the storage

Systems using many storage devices require additional implementation effort to keep the data in a consistent state. We cannot completely avoid data inconsistencies (due to the delayed nature of some operations and to failures), so we have to keep part of the consistency checks in the source code. Still, we can minimize the amount and impact of problems in this area by implementing transaction handling within the application. For complex operations that involve changes in several data sources, we keep data consistency relatively high through a design that defines "domain transactions" mapped onto "storage transactions" assigned to different servers running different types of devices.

This approach allows developers to focus on the logic and on specific storage-related cases, while the framework handles the transactions for most standard operations automatically.
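
The shape of such a design can be sketched as follows (all class and method names are my assumptions for illustration, not Tuenti's real API): a domain transaction enlists one storage transaction per device and drives them through a two-phase, prepare-then-commit protocol.

```php
<?php
// Hypothetical sketch of a "domain transaction" coordinating per-device
// "storage transactions". Names and protocol details are illustrative.
interface StorageTransaction {
    public function prepare();   // validate / buffer pending writes
    public function commit();    // apply buffered writes
    public function rollback();  // discard buffered writes
}

class InMemoryStorage implements StorageTransaction {
    public $data = array();      // committed state
    private $pending = array();  // buffered writes

    public function write($key, $value) { $this->pending[$key] = $value; }
    public function prepare() { /* nothing to validate in this toy device */ }
    public function commit() {
        foreach ($this->pending as $k => $v) { $this->data[$k] = $v; }
        $this->pending = array();
    }
    public function rollback() { $this->pending = array(); }
}

class DomainTransaction {
    private $storages = array();
    public function enlist(StorageTransaction $s) { $this->storages[] = $s; }
    public function commit() {
        foreach ($this->storages as $s) { $s->prepare(); }  // phase 1
        foreach ($this->storages as $s) { $s->commit(); }   // phase 2
    }
    public function rollback() {
        foreach ($this->storages as $s) { $s->rollback(); }
    }
}

$db    = new InMemoryStorage();
$cache = new InMemoryStorage();
$tx = new DomainTransaction();
$tx->enlist($db);
$tx->enlist($cache);
$db->write("user:1", "Alice");
$cache->write("user:1", "Alice");
$tx->commit();                   // both devices apply, or neither does
echo $db->data["user:1"];        // Alice
```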

Complete abstraction of the data storage layer

The central concept of the storage layer is the "storage target name". These names are linked to configuration data such as the storage devices used, the partitioning and/or replication schema, authentication data, etc. In the domain layer, developers can write code focusing on the logic and relations between domain entities, communicating with the storage layer as if it were a single device (with transactions handled as mentioned above).

This means that when a data-related operation has to be performed, developers don't need to worry about device-specific details, caching, etc. Everything is handled automatically, so in the most common case the data will come from memcache; if it was already used while handling the current request, it will already be cached inside the framework; and if the data has not been used for a while, it will come from MySQL because the cache entry has expired.
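
As an illustrative sketch only (the actual configuration format is not public), a storage target name could resolve to device, shard and cache settings like this:

```php
<?php
// Invented example: a "storage target name" mapped to configuration.
$storageTargets = array(
    'user_profiles' => array(
        'device'     => 'mysql',
        'partitions' => 16,      // e.g. user_id % 16 picks the shard
        'cache'      => array('device' => 'memcache', 'ttl' => 300),
    ),
);

// The domain layer only names the target; the framework resolves the
// shard, cache and credentials behind a call like this one.
function shardFor($target, $userId, array $config)
{
    $partitions = $config[$target]['partitions'];
    return $target . '_' . ($userId % $partitions);
}

echo shardFor('user_profiles', 42, $storageTargets); // user_profiles_10
```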

Standardization of data containers

Almost all data loaded into the system is stored in standard containers (DataContainer) that are sub-classed to implement different logic for handling different groups of data (Queue, Collection, etc.). The standard containers allow us to integrate several features into the framework that not only speed up development and reduce the domain layer's complexity, but also apply system-wide security and unify the data access interfaces.
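
The idea can be sketched like this (these toy classes are assumptions for illustration, not the real DataContainer hierarchy): a common base container provides uniform access, and sub-classes add specialized semantics.

```php
<?php
// Invented sketch of a standard container hierarchy.
class DataContainer
{
    protected $items = array();

    public function set($key, $value) { $this->items[$key] = $value; }
    public function get($key)
    {
        return isset($this->items[$key]) ? $this->items[$key] : null;
    }
    public function count() { return count($this->items); }
}

// A Queue reuses the same storage but adds FIFO semantics on top:
class Queue extends DataContainer
{
    public function push($value) { $this->items[] = $value; }
    public function pop() { return array_shift($this->items); }
}

$q = new Queue();
$q->push("a");
$q->push("b");
echo $q->pop(); // a
```

Because every data group shares the base interface, framework-level features (access control, caching, CRUD generation) can be written once against DataContainer instead of once per data type.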


Every architecture is designed with trade-offs in mind: some architectural concerns receive better support while others are sacrificed. In this case we have observed higher memory consumption, greater difficulty implementing certain performance-related optimizations, and reduced flexibility in the ways code can be written.

Currently developers have less freedom than with the first version of Tuenti's back-end framework. Previously a developer could write any SQL statement he wanted and decide whether to cache the data or not, and how that caching should work, down to the last detail. There was more flexibility, but the process was error-prone and produced a lot of duplicated code (read: copy+paste, or wasted time). We still need to provide a way for developers to write complex SQL queries that cannot be generated by the framework automatically, but these are exceptions, as the regular queries executed at Tuenti are very simple.

As mentioned, higher memory consumption and more challenging implementation of optimizations are drawbacks associated with a more complex framework. Neither CPU nor memory consumption is considered a problem for regular web requests, and standard response time was not affected in a noticeable way; yet a glance at the execution statistics of our back-office scripts shows that there is still a lot of room for improvement in terms of memory and CPU usage.

The higher memory consumption cannot be attributed exclusively to the framework; it is also due to the fact that objects are cached in memory. Having a garbage collector is useless unless you release all references to objects. Caching is a very good way to improve speed, but the code must provide ways to flush cached data in order to remain usable in scripts that usually process much larger amounts of data than web requests do.

Evolution of the framework

A good framework design will allow for its evolution, but will define and enforce clear boundaries. Re-architecting a system is always a very difficult and expensive process, so one has to take into consideration all possible concerns (especially non-technical ones) and the requirements defined for the system. It is also clear that the first version will never be the last one, so you need to be patient and listen to all of the feedback you receive.

Once you have a stable version of your framework you need to convince the developers that it really solves their needs and that it will make their lives easier. Having your developers "on board" has several advantages:

  • they will suggest improvements and point out anything that feels awkward,
  • they remove the communication barrier that would isolate your framework from "reality",
  • they speed up the framework's development by streamlining ideas and effort.

When you are introducing a new framework, you also need to integrate it with the old one. This can be very hard and tricky. What you usually want to do is make the old framework use the new one: you maintain the old interface but run the new logic inside. Hopefully the old interfaces will make sense and you will not have to spend weeks trying to make "the magic" work in a technical world. Keep in mind that the interface is not just a function signature and its arguments; you also have to respect the old code's error handling and its influence on the environment.
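
That integration is essentially the adapter pattern. A sketch (all class, method and error-convention details here are invented to illustrate the point): the legacy interface survives, including its legacy error handling, while the new framework does the work inside.

```php
<?php
// Stand-in for a service from the new framework (invented for illustration).
class NewUserService
{
    public function fetchName($userId)
    {
        return $userId === 1 ? "Alice" : null;  // placeholder for real logic
    }
}

// The old library's interface, preserved so legacy call sites keep working -
// including the legacy error convention (false on failure, no exceptions).
class OldUserLib
{
    private $service;

    public function __construct()
    {
        $this->service = new NewUserService();  // new logic runs inside
    }

    public function get_user_name($userId)
    {
        try {
            $name = $this->service->fetchName($userId);
        } catch (Exception $e) {
            return false;                        // old callers expect false
        }
        return $name === null ? false : $name;
    }
}

$lib = new OldUserLib();
echo $lib->get_user_name(1);        // Alice
var_dump($lib->get_user_name(999)); // bool(false)
```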

As a framework developer you should never forget that, however cool your framework is, it is there to help the people developing functionality on top of it.

HackMeUp #7

Posted on 4/16/2010 by Andrzej Tucholka, Lead Code Architect

Last Friday we organized another of Tuenti's development feasts. In an atmosphere of great innovation, most of our developers were coding things they find interesting, important and simply awesome.

At the end of the day everybody grabbed some snacks and their favorite drinks, and we spent over an hour discussing and presenting the results of our work.

The winning project (by popular vote), Video Chat, was implemented by Davide Mendolia and Sergio Cinos Rubio. Again, congratulations to both!

Davide Mendolia, Frontend Engineer

We experimented with one of the new features of Adobe Flash Player 10.0, which allows peer-to-peer connections for exchanging audio and video. The traditional streaming architecture is based on a client sending the content to a server (Flash Media Server) and other clients connecting to it to receive the stream.

The new possibility introduced in 10.0 is to bypass this bottleneck by connecting directly to the other client. How is this implemented? The two clients that wish to communicate with each other first connect to a rendezvous server (Adobe Stratus), which generates a "Peer ID" for each of them.

These Peer IDs have to be exchanged between the clients to establish the connection. We decided to use our chat platform, based on XMPP, to send this information. When the first client decides to establish communication with the second, we ask through our Jabber interface, using HTML and JavaScript, whether the second client accepts the communication, providing the Peer ID of the first one at the same time. Once accepted, the second client connects to the first one through Flash, sending its own Peer ID, and the two-way communication can start.

Links: Adobe Stratus

Other implemented projects included:

  • Photos in common - suggesting people you might know, based on being tagged in the same photos as other people.
  • Where 2 - detecting the user's geoposition and suggesting nearby places you might like.
  • Google gadgets for Trac - a Trac macro allowing integration with any Google gadget.
  • Real time notifications - real-time notifications informing you about activity in your friends network.
  • Local 2.0 - live visualization of what is going on in Tuenti Places, with additional (and really cool) views showing interesting places that one should know about, and more.
  • Tuenti in HipHop - a one day sprint to compile Tuenti using HipHop.

