Devops is dead. Long live Devops

Posted on 1/23/2015 by Óscar San José, Principal SRE Engineer

Once again, Tuenti had the privilege of hosting a meetup event in its offices! This time the guest star was Antonio Peña, who came to give a talk for the Madrid Devops group called "DevOps 101". Antonio is a passionate advocate of the DevOps culture, and this talk was intended as an introduction to the field.

He described the theoretical principles and practical advantages of the approach for those not yet familiar with them. He went on to stir the controversy about the identity crisis and the "death of DevOps", and gave us his particular point of view about it. He also shared some interesting thoughts from the most prominent international figures of the DevOps scene, including famous names such as Seth Vargo and Katherine Daniels (you are already late if you are not following them on Twitter yet: @SethVargo and @beerops), and how their ideas have not always been easily accepted by the community (including, in Seth's case, death threats for his disruptive ideals!*).

He finished by giving an overview of the DevOps movement in Spain, and of how any one of us developers can kickstart the DevOps culture in our own jobs.

Next there was an interesting Q&A session, where participants were able to express their doubts and hopes, and where some people, including Tuenti developers such as Pedro Gómez and myself, were able to share their own experiences working in a DevOps-loving environment.

Finally, the most important part of any meetup event: beers! A lot of interesting ideas were shared and discussed in an informal, yet thrilling, atmosphere. As you know, participation is key, and highly encouraged!

As you already know, we are open to hosting other user groups meetings and talks so, if you’re interested in organising a tech or design-related event, get in touch! ;)

*As Antonio points out, Seth Vargo did not receive death threats for his opinions about DevOps, but rather for his work as a developer:

The Tuenti Release and Development Process: Conclusion and the Future

Posted on 1/14/2014 by Víctor García, DevOps Engineer

Blog post series: part 6 (the very last one)
You can read the previous post here.


There is nothing I can say other than that all of this has been a great success for the company. The new process and all of the changes we introduced have improved developer throughput, sped up the development workflow, brought us close to true continuous integration and delivery, and, in the end, made us work better. The site and mobile apps have fewer bugs and, overall, we’re very proud of what we’ve done:

  • Self-service dev environments: disposable, ready to use, less support required
  • All manual processes automated: less time, fewer errors, better accountability
  • Well-known processes: used in every project, less support required
  • Automated pull requests: they ensure testing and an “evergreen trunk”, which in turn improves continuous integration and makes people integrate more often.

Buzzwords on the Internet and in Talks: Continuous Delivery/Integration, Agility, Automation...

Most of you have probably heard talks or read articles on the Internet about continuous delivery, continuous integration and all of these cool agile methodologies where everything relies on automation.

For the skeptics, here at Tuenti you can see that sooner or later, they pay off.

Not Following Them? Why Not?

Some of you may think that in your company, things are working just fine the way they are and that you don’t need to change them. You may be right, but what you might not know is that things could be even better. Why not always be ambitious and go for the very best?

You may think that changing your development workflows and your developer culture would take a lot of effort. OK, it may not be straightforward, but it’s worth trying.

I’m not going to explain why doing smaller but more frequent releases is better, why doing things manually is not the way to go, or why continuous integration reduces the number of bugs, because you are probably tired of hearing the same things again and again, and because you can easily find that information by googling it.

How to Change

In newer, smaller companies, changing processes is not as hard as it is in bigger companies with old workflows and people who are used to working with prehistoric methodologies.

Here are some tips that might help you make such changes in these more complicated settings:

  • Sell your idea by demonstrating success stories from other companies, clearly stating why change is better.
  • Start implementing something that makes the developers’ lives a bit easier.
  • Make small changes (Minimum Viable Product). Bigger changes disrupt the process in many ways and developers won’t like them.
  • If they like your first MVP, you won’t have problems getting an opportunity to continue with those changes.
  • Developers will depend on you, so be nice to them. Always be helpful and understanding when it comes to their problems. You can even run polls to find out what’s annoying them the most or what it is they need.
  • Automate everything. It ALWAYS pays off.

Transparency for Everyone

When someone in your company (techies, product managers, CTO, CFO... whoever) needs information and is unable to get it, you have a big problem.

If everything is automated, it should be easy to gather information and show it company-wide: the release state, the next code that will be released, what that code does, what bugfixes it will include, who the author is, its risks, how to test it, etc.

Create a dashboard that everyone can see (a good design will help), so you can make them aware of, for example, when a new feature is going to be released. This way the communications/press team can make an announcement, or the marketing people can prepare campaigns in time. Otherwise, they will ask you personally, and in most cases this is a waste of either your time or theirs.

And Now What? Next Steps and Projects

We have automated almost everything, but there are tons of things left to do and tons of ways to improve/optimize the tasks and operations we’ve already automated.

Here is a list of some of the things we would still like to do:

  • Open source Flow: Many people have requested this from us, and we are willing to do it, but at the moment some parts of the code are too tightly coupled to Tuenti-specific things. We would need to do some refactoring before open sourcing it.
  • Speed up the pull request system: By testing more than one pull request at a time. For example, if there are two pull requests, A and B, the idea would be to test A and B together and also A alone. If A+B is successful, Flow integrates two instead of one; if A+B fails and A is successful, Flow integrates one; if both fail, there is nothing to integrate, but experience shows that this last case is the least frequent, because most pull requests are successful.
  • Code risk analysis: Analyze this risk in the pull requests system so we can cherry pick pieces of code according to their risk and better plan releases. For example, we could do low risk releases only on Fridays.
  • Improve the development environment by virtualizing in different ways: Tuenti-in-a-box is not always enough to provide a full development environment, because its resource consumption is too high or because it’s not ready for other kinds of environments, like Android or iOS development. We have been thinking about improving the virtualization system and, instead of using VirtualBox, we would like to try some lightweight virtualization providers like LXC. It’s worth researching because we think there is room for great improvements here.
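As a side note on the pull request speed-up idea above, the batching logic can be sketched in a few lines. This is a toy model: the `passes_tests` callback stands in for a full Jenkins run over the merged branches, and generalizing beyond two pull requests is our own extrapolation, not something the post describes.

```python
def integrate_batch(pull_requests, passes_tests):
    """Speculatively test a batch of queued pull requests.

    passes_tests(prs) simulates a CI run over the merge of the given
    pull requests and returns True when all tests pass. Returns the
    list of pull requests that get integrated.
    """
    if not pull_requests:
        return []
    if passes_tests(pull_requests):
        # The whole batch is green: integrate everything in one go.
        return list(pull_requests)
    # Otherwise fall back to the largest passing prefix (A alone, etc.).
    for cut in range(len(pull_requests) - 1, 0, -1):
        prefix = pull_requests[:cut]
        if passes_tests(prefix):
            return list(prefix)
    return []
```

With two pull requests this reproduces exactly the A+B / A-only cases from the list above; the common case (everything green) integrates both with a single test run.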

The Tuenti Release and Development Process: Deploying Configuration

Posted on 11/11/2013 by Víctor García, DevOps Engineer

Blog post series: part 5
You can read the previous post here.

Normally, website development differentiates between regular code and configuration code.

Having configuration separate from the code allows you to make quick, basic changes without touching the logic. These changes are safer because they should just be an integer value change, a boolean swap, a string substitution, etc., and they don’t involve a full release process.

Some good practices have been described recently on the Internet about how to write your code in a flexible way to avoid useless releases, how to do A/B testing, how to make your database changes backwards compatible, etc., and all of these good practices rely on a good configuration system.
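As an illustration, a configuration system of this kind can be as simple as a base config plus per-environment overrides. This is a generic sketch, not Tuenti's actual format; all keys and values here are made up:

```python
# Base configuration shipped with the code. Values are deliberately
# simple (booleans, integers, strings) so a change never touches logic.
BASE_CONFIG = {
    "chat_enabled": True,
    "max_upload_mb": 25,
    "registration_form": "v1",
}

def effective_config(base, overrides):
    """Merge per-environment overrides on top of the base config."""
    merged = dict(base)
    merged.update(overrides)
    return merged

# A production hotfix is then just an override, not a code release:
production_overrides = {"chat_enabled": False}
config = effective_config(BASE_CONFIG, production_overrides)
```

The key property is that deploying a new override file is a tiny, low-risk operation compared to shipping code.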

Following the DevOps Culture

Here at Tuenti, we are very fond of the DevOps culture and try to apply it as much as possible. We consider it the way to go and the way to do things efficiently, and there is proof that it has helped us improve quite a lot.

In our company, configuration deployment is a clear example of the DevOps culture. There is no manual intervention, and there isn’t anyone, such as a release manager or an operations guy, who does the deployments. Every developer pushes his/her configuration changes to production on his/her own. Therefore, devs are doing ops tasks.

We use ConfigCop for that.


ConfigCop is a tool to deploy configuration to preproduction and production servers. Any developer can, and must, use it to first test their changes on a preproduction server and then deploy them to production.

Deploy to Preproduction

The preproduction deployment logic is done fully on the client side, and the basic operations a developer can use are configuration initialization and configuration update, the latter being the one that uploads any configuration to be tested.

ConfigCop pulls the latest code from Mercurial, gathers all configuration files, and stages them, applying the overrides necessary to make them work on preproduction servers and generating some .json files readable by HipHop.

Then it just deploys them using Rsync, and the developer is ready to test.
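The gather-override-serialize part of that flow might look roughly like this. This is a simplified sketch: the Mercurial pull and the Rsync push are elided, and the function and file names are hypothetical, not ConfigCop's actual API:

```python
import json

def stage_configs(configs, preprod_overrides):
    """Stage configuration files for a preproduction deploy.

    configs maps a config name to its settings dict, as gathered from
    the repository. Per-file overrides are applied on top, and each
    staged file is serialized as JSON (HipHop reads .json files).
    Returns a mapping of output filename to serialized contents.
    """
    staged = {}
    for name, settings in configs.items():
        merged = dict(settings)
        merged.update(preprod_overrides.get(name, {}))
        staged[name + ".json"] = json.dumps(merged, sort_keys=True)
    return staged
```

The staged mapping would then be written to disk and pushed to the preproduction server with Rsync.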

Deploy to Production

A production deployment must be sequential and cannot be done in parallel because conflicts may arise.

Therefore, for this, ConfigCop uses its server side. It’s basically server-client communication over RPC that establishes a locking mechanism to perform sequential deployments. The lock is given to the developer currently deploying a configuration and it can’t be stolen.

Until the developer has finished a configuration change, another one can’t start.
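The locking behavior described here can be modeled in a few lines. This sketch uses an in-process lock in place of the real RPC server; the non-stealable semantics (only the holder can release) are the point:

```python
import threading

class DeployLock:
    """Grants the production-deploy lock to one developer at a time.

    The lock cannot be stolen: only the current holder may release it,
    so configuration deployments are strictly sequential.
    """

    def __init__(self):
        self._mutex = threading.Lock()
        self._holder = None

    def acquire(self, developer):
        """Try to take the lock; returns True if this developer holds it."""
        with self._mutex:
            if self._holder is None:
                self._holder = developer
                return True
            # Re-entrant for the current holder, denied for anyone else.
            return self._holder == developer

    def release(self, developer):
        """Release the lock; only the holder is allowed to do this."""
        with self._mutex:
            if self._holder != developer:
                raise PermissionError("lock is held by %s" % self._holder)
            self._holder = None
```

In the real system the same state would live in the ConfigCop server, with acquire/release exposed over RPC.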

The workflow is pretty simple:

  • Start a configuration change by just typing a command line.
  • ConfigCop checks if it’s unlocked. If it is, it deploys the configuration change to a preproduction testing server that will always have the same code as production.

    • This is important: we are ensuring that the configuration change will work properly in production.
  • Notify the developer that s/he can start testing.
  • Once it’s tested, push it to production with a single command.
  • The deployment takes place and finishes in less than 1 minute.


ConfigCop freed the release managers from these operational tasks, and now, with just two commands, developers are able to test and deploy configuration to production in an easy, reliable, and fast way.

Real Examples

  • Due to a network problem, some chat servers are down, the load on those still alive is increasing very quickly, and we will probably suffer an outage.

    • A developer can just make a config change to temporarily disable the chat for 10% of the users.
    • The problem is fixed in less than a minute.
  • A release requires changes in the database schema, so the developer coded it to insert into both tables to make it backwards compatible. The release is successfully deployed, so the old schema can be deprecated.

    • A developer makes a config change so that the code inserts only into the new tables.
    • The developer does it on his/her own; nobody else is bothered with the task.
  • The product team wants to test two different layouts (A/B testing) for the registration form, with some users using the old one and others using the new one, so they can measure stats and choose the best one.

    • The developer will play with the percentages of users getting version A or version B and, when the final decision is made, set the chosen one to 100%.
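Percentage-based switches like the ones in these examples are typically implemented by hashing the user id into a stable bucket, so a given user always lands on the same side of the split. A generic sketch (the hashing scheme is an assumption for illustration, not Tuenti's actual implementation):

```python
import hashlib

def bucket(user_id, buckets=100):
    """Map a user to a stable bucket in [0, buckets)."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % buckets

def chat_enabled_for(user_id, disabled_percent):
    """Disable chat for the given percentage of users (first example)."""
    return bucket(user_id) >= disabled_percent

def registration_variant(user_id, percent_b):
    """A/B split for the registration form (third example)."""
    return "B" if bucket(user_id) < percent_b else "A"
```

Both `disabled_percent` and `percent_b` would live in the deployed configuration, so dialing them up or down is a one-line config change.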

The Tuenti Release and Development Process: Release Time!

Posted on 9/16/2013 by Víctor García, DevOps Engineer

Blog post series: part 4
You can read the previous post here.

Release Candidate Selection

So, we have made sure that the integration branch is good enough and that every changeset is a potential release candidate. Therefore, the release branch selection is trivial: just pick the latest changeset.

The release manager is in charge of this. How does s/he do it? Using Flow and Jira.

Like the pull request system we talked about in the previous post, Jira orchestrates the release workflow, and Flow takes care of the logic and operations. So:

  1. The release manager creates a Jira “release” type ticket.
  2. To start the release, s/he just transitions the ticket to “Start”.

    1. When this operation begins, Jira notifies Flow and the release start process kicks off.
  • This is what Flow does in the background:

    1. It creates a new branch from the latest integration branch changeset.
    2. It analyzes the release contents and gathers the involved tickets, to link them to the release ticket (Jira supports ticket linking).
    3. It configures Jenkins to test the new branch.
    4. It adds everyone involved in the release as watchers of the release ticket (a Jira watcher is notified of every ticket change), so that all of them are aware of anything related to it.
    5. It sends an email notification with the release contents to everyone on the company’s tech side.

  The whole process takes ~1 minute.

Building, Compiling and Testing the Release Branch

Once the release branch is selected, it is time to start testing it because there might be a last minute bug that escaped all of the previous tests and eluded our awesome QA team’s eyes.

Flow detects new commits done in the release branch (these commits almost never occur) and builds, compiles, and updates an alpha server dedicated to the release branch.


Why do we build the code? PHP is an interpreted language that doesn’t need to be built! Yes, it’s true, but we need to build other things:

  1. JavaScript and HTML minification
  2. Fetching libraries
  3. Static file versioning
  4. Generating translation binaries
  5. Building YUI libraries
  6. etc.


We also use HipHop, so we do have to compile the PHP code.
The HipHop compilation for a huge code base like ours is quite a heavy operation, so a farm of 6 servers is dedicated just to this purpose, and with it the full compilation only takes about 5-6 minutes.


The built and compiled code is deployed to an alpha server for the release branch, and QA tests the code there. The testing is fast and not extensive; it’s basically a sanity test of the main features, since Jenkins and the previous testing assure its quality. Furthermore, the error log is checked in case anything fails silently but leaves an error trace.
This testing phase usually takes a few minutes, and bugs are rarely found.
In addition, Jenkins also runs all of the automated tests, so we have even more assurance that no tests have been broken.

Staging Phase, the Users Test for Us

Staging is the last step before the final production deployment. It consists of a handful of dedicated servers where the release branch code is deployed and thousands of real users transparently “test” it. We just need to keep an eye on the error log, the performance stats, and the server monitors to see if any issue arises.

This step is quite important. New bugs are almost never found here, but the ones that are found are very hard to detect, so anything found here is more than welcome, especially because those bugs are usually caused by a large number of users browsing the site, a situation that we can’t easily reproduce in an alpha or development environment.

Updating the Website!

We are now sure that the release branch code is correct and bug free. We are ready to deploy the release code to hundreds of frontend servers. The same built code we used for deploying to the release branch’s alpha server will be used for production.

The Deployment: TuentiDeployer

The deployment is performed with a tool called TuentiDeployer. Every time we’ve mentioned a “deploy” within these blog posts, that deploy was done using this tool.

It is used across the entire company; any type of deployment, for any service or to any server, uses it. It’s basically a smart wrapper over Rsync and WebDAV that parallelizes transfers and supports multiple, flexible configurations, letting you deploy almost anything you want wherever you want.

The production deployment, of course, is also done with TuentiDeployer, and pushing code to hundreds of servers only takes 1 - 2 minutes (mostly depending on the network latency).
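The parallel fan-out at the heart of such a deployer can be sketched with a thread pool. This is a generic model, not TuentiDeployer's code; the `push` callback stands in for a single Rsync or WebDAV transfer to one server:

```python
from concurrent.futures import ThreadPoolExecutor

def deploy(servers, push, max_parallel=32):
    """Push code to many servers in parallel.

    push(server) performs one transfer (Rsync/WebDAV in the real tool)
    and raises on failure. Returns the list of servers that failed,
    so the caller can retry them or abort the rollout.
    """
    failed = []
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {server: pool.submit(push, server) for server in servers}
        for server, future in futures.items():
            try:
                future.result()
            except Exception:
                failed.append(server)
    return failed
```

With a few hundred frontends and transfers running 32 at a time, the total wall-clock time is dominated by network latency, which matches the 1-2 minute figure above.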

It performs different types of deployments:

  1. PHP code to some servers that do not support HipHop yet.
  2. Static files to the static servers.
  3. An alpha deployment to keep at least one alpha server with live code.
  4. HipHop code to most of the servers:

    1. Not fully true: we can’t push a huge binary file to hundreds of servers.
    2. Instead, it only deploys a text file with the new HipHop binary version.
    3. The frontend servers have a daemon that detects when this file has changed.
    4. If it changed, each server gets the binary file from the artifact server.

      • The binary is there as a previous step, pushed right after its compilation.
    5. Obviously, hundreds of servers getting a big file from one artifact server would bring it down, so there is a bunch of artifact cache servers to fetch from and relieve the master one.
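One iteration of the frontend daemon's check-and-fetch loop, as described in the steps above, might look like this (the function names are illustrative, not the daemon's real interface):

```python
def sync_binary(local_version, deployed_version_file, fetch_binary):
    """One iteration of the frontend daemon's loop.

    deployed_version_file() returns the version text pushed by the
    deployer; fetch_binary(version) downloads the HipHop binary from
    the nearest artifact cache server. Returns the version now running.
    """
    wanted = deployed_version_file()
    if wanted != local_version:
        # Fetch from a cache server, not the master artifact server.
        fetch_binary(wanted)
        return wanted
    return local_version
```

Pushing a tiny version file and letting servers pull the binary decouples the fast Rsync fan-out from the slow transfer of a multi-gigabyte artifact.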

Finishing the Release

Release done! It can now be safely merged into the live branch so that every developer can get to work with the latest code. This is when Jira and Flow take part again. The Jira ticket triggers the whole automated process with just the click of a button.

The Jira release ticket is transitioned to the “Deployed” status and a process in Flow starts. This process:

  1. Merges the release branch to the live branch.
  2. Closes the release branch in Mercurial.
  3. Disables the Jenkins job that tested the release branch to avoid wasting resources.
  4. Notifies everyone by email that the release is already in production.
  5. Updates the Flow dashboards to show that the release is over.
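Since the finish process is a fixed sequence, it is easy to model as a step runner; the step names below mirror the list above, and everything else is illustrative:

```python
def finish_release(branch, actions):
    """Run the post-deploy steps for a release, in a fixed order.

    actions maps a step name to a callable taking the branch name.
    Keeping the order explicit makes the process repeatable and easy
    to audit. Returns the list of steps executed.
    """
    steps = [
        "merge_to_live",        # 1. merge the release branch into live
        "close_branch",         # 2. close the branch in Mercurial
        "disable_jenkins_job",  # 3. stop testing the release branch
        "notify_by_email",      # 4. tell everyone it's in production
        "update_dashboards",    # 5. mark the release as over in Flow
    ]
    for step in steps:
        actions[step](branch)
    return steps
```

In the real system each callable would be a Flow operation triggered by the Jira status transition.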

Oh No!! We Have to Revert the Release!!

After deploying the release to production, we might detect that something important is really broken and have no other choice but to revert the release, because that feature cannot be disabled by config and the fix looks complex and won’t be ready in the short term.

No problem!! We store compressed builds of the last few releases, so we just need to decompress one and do the production deployment.
The revert process takes only about 4 minutes and almost never takes place.
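The revert mechanism, keeping compressed builds of the last releases around, can be sketched like this (a toy in-memory model; the real artifacts would live on disk or on the artifact server):

```python
import gzip

class BuildArchive:
    """Keeps the compressed builds of the last few releases."""

    def __init__(self, keep=3):
        self._keep = keep
        self._builds = []  # (release, compressed bytes), newest last

    def store(self, release, build_bytes):
        """Compress and archive a build, evicting the oldest ones."""
        self._builds.append((release, gzip.compress(build_bytes)))
        self._builds = self._builds[-self._keep:]

    def revert_to(self, release):
        """Return the decompressed build, ready to redeploy."""
        for name, blob in self._builds:
            if name == release:
                return gzip.decompress(blob)
        raise KeyError(release)
```

Reverting is then just `revert_to` followed by the normal production deployment, which is why the whole process fits in about 4 minutes.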

You can keep reading the next post here.

The Tuenti Release and Development Process: Development Branch Integration

Posted on 8/26/2013 by Víctor García, DevOps Engineer

Blog post series: part 3
You can read the previous post here.

In the previous blog post, we mentioned that one of the requirements a development branch must fulfill is a pull request. This is the only process at Tuenti for merging code into the integration branch.
We really want an evergreen integration branch that is always in a good state with no tests failing. To accomplish that, we created a pull request system managed by a tool called Flow.

The Flow Pull Request System

Flow is a tool that fully orchestrates the whole integration, release, and deployment process and offers dashboards to show everyone the release status, the integration branch state, pull request results, etc. A full blog post will explain it; here we’re focusing on branch integration.

Although Flow has the logic and performs the operations, every action is triggered through Jira.

Following the above diagram, here are the steps performed:

  • The developer creates an “integration” ticket in Jira. This ticket is considered a unique entity that represents a developer’s intention to integrate code, so the ticket must contain some information regarding the code that will be merged:

    • Branch contents
    • Author
    • QA member representative
    • Risks
    • Branch name
    • Mercurial revision to merge
  • Then, the ticket must be transitioned to the “Accepted” status by a QA member, so we can keep track and be assured that at least one of them has reviewed this branch.
  • To integrate the branch, the ticket must be transitioned to the “Pull request” status. This action will launch a pull request in Flow.

    • Flow and Jira are integrated and they “talk” in both directions. In this case, Jira sends a notification to Flow informing of a new pull request.
    • Additionally, in the background, Flow will gather the tickets involved in the branch and will link all of these tickets with the integration ticket using the Jira ticket relationships.
    • This provides a good overview of what is included in the branch to be merged.
  • Flow has a pull request queue, so they are processed sequentially.
  • Flow starts processing the pull request:

    • It creates a temporary branch with the merge of the current integration branch and the pull request branch that will be merged.
    • It configures Jenkins and triggers a build for that temporary branch to run all tests.
    • It waits until Jenkins has finished and when it’s done, Jenkins notifies Flow.

      • Jenkins executes all tests in 40 minutes, using a special and more parallelized configuration.
    • Then, it checks the Jenkins results and decides whether or not the branch can actually be merged to the integration branch.

      • If successful, it performs the merge, transitions the Jira ticket to “Integrated,” and starts with the next pull request.
      • Otherwise, the pull request is marked as failed and transitions the Jira ticket to “Rejected”.
    • For every operation, Flow sends an email to the branch owner notifying him or her of the status of the pull request, and adds a Jira comment to the ticket.
  • If the pull request fails, Flow shows the reason for this in its dashboard and in the ticket (failed tests, merge conflicts), so the developer must:

    • Fix the problems
    • Change the Mercurial revision in the Jira ticket, because the fix will require a new commit.
    • Transition the ticket to the pull request status again to launch a new pull request.

As you can see, this process requires minimal manual intervention; clicking a few Jira buttons is enough. It’s quite stable and, more importantly, it lets developers work on other projects while Flow is working, so they don’t have to worry about merges anymore. They just launch the pull request from their Jira ticket and can forget about everything else.
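The decision logic at the core of the steps above can be condensed into a small function. All the callback names here are illustrative: `merge_with_integration` stands for building the temporary merge branch (returning None on merge conflicts), `run_jenkins` for the full test run, and `jira` for the ticket transition:

```python
def process_pull_request(pr, merge_with_integration, run_jenkins, jira):
    """Process one queued pull request, mirroring the steps above."""
    branch = merge_with_integration(pr)
    if branch is None:
        jira(pr, "Rejected")       # merge conflicts
        return False
    if run_jenkins(branch):
        jira(pr, "Integrated")     # the merge lands on the integration branch
        return True
    jira(pr, "Rejected")           # failing tests
    return False
```

Because pull requests are taken from a queue and processed one at a time, a successful merge can never break the integration branch for the next request in line.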

So, this is how we ensure that the integration branch is always safe and that tests don’t fail. Therefore, every integration branch changeset is a potential release candidate.

This model has proven to improve the integration workflow and increase developer throughput, so it can be considered a total success.

You can keep reading the next post here.
