JavaScript Continuous Integration

Posted on 8/22/2013 by Miguel Ángel García, Test FW Engineer; Juan Ramírez, Software Engineer & Alberto Gragera, Software Engineer

Here at Tuenti, we believe that Continuous Integration is the way to go in order to release in a fast, reliable fashion, and we apply it to every single level of our stack. Talking about CI is the same as talking about testing, which is critical for the health of any serious project. The bigger the project is, the more important automatic testing becomes. There are three requirements for testing something automatically:

  • You should be able to run your tests in a single step.
  • Results must be collected in a way that is machine-understandable (JSON, XML, YAML; it doesn’t matter which).
  • And of course, you need to have tests. Tests are written by programmers, and it is very important to give them an easy way to write them and a functioning environment in which to execute them.
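The second requirement can be sketched in a few lines. This is only an illustration, not Tuenti's actual code; the result shape and field names are assumptions:

```javascript
// Minimal sketch of machine-readable result collection. The result shape
// ({ name, passed }) is hypothetical, not the format the post describes.
function summarize(results) {
  const failed = results.filter((r) => !r.passed);
  return {
    total: results.length,
    passed: results.length - failed.length,
    failed: failed.length,
    // A non-zero exit code lets CI fail the build when run in a single step.
    exitCode: failed.length === 0 ? 0 : 1,
  };
}

const report = summarize([
  { name: 'login renders', passed: true },
  { name: 'logout clears session', passed: false },
]);
console.log(JSON.stringify(report));
```

A CI job only needs to parse that JSON (or exit code) to decide whether the build is green.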

This is how we tackled the problem.

Our Environment

We use YUI as the main JS library in our frontend, so our JS CI needs to be YUI compliant. Very few tools support JS CI; we went for JSTestDriver at the beginning but quickly found two main problems:

  • There were a lot of problems between the YUI event system and the way that JSTestDriver retrieves the results.
  • The way tests are executed is not very human-friendly. We use a browser to work (and several to test our code in), so it would be nice to use the same interface to run JS tests. This ruled out tools like PhantomJS and other headless runners.

Therefore, we had to build our own framework to execute tests, keeping all of the requirements in mind. We focused on keeping it as simple as possible, and we decided to use the YUI testing libraries (with Sinon as the mocking framework) because they let us keep each test properly isolated while reporting results in both machine-readable (XML or JSON) and human-readable formats at the same time.
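The isolation that Sinon provides comes down to replacing a module's dependencies with controllable fakes. The sketch below hand-rolls a tiny stub so it runs without any library; the module and function names are invented for illustration:

```javascript
// Sinon gives us stubs for free; this hand-rolled stub only sketches the
// isolation idea so the snippet stays self-contained.
function stub(returnValue) {
  const fn = (...args) => {
    fn.calls.push(args); // record every invocation for later assertions
    return returnValue;
  };
  fn.calls = [];
  return fn;
}

// Hypothetical module under test, depending on a transport we do not
// want to hit in a unit test.
function sendGreeting(transport, user) {
  return transport(`Hello, ${user}!`);
}

const fakeTransport = stub(true);
sendGreeting(fakeTransport, 'Alice');
console.log(fakeTransport.calls[0][0]); // what the module actually sent
```

The test asserts against `fakeTransport.calls` instead of a real network, which is what keeps each module's tests isolated from its dependencies.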

The Approach

We solved the simplest use case first (executing one test in a browser) and then iterated from there. A very simple PHP framework was put together to walk through the JS module directories, identify the modules with tests (thanks to a basic naming convention), and execute them. The results were shown both in a human-readable way and as embedded XML.
PHP was required because we wanted to reuse many of the tools we had already implemented in PHP around YUI (server-side dependency map generation, etc.).
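The discovery step is simple to picture. The real framework is PHP, but this JS sketch shows the same idea; the `-test.js` suffix is a hypothetical convention, since the post doesn't specify the actual one:

```javascript
// Sketch of naming-convention test discovery. The real framework walks
// directories in PHP; here we work over a plain list of paths, and the
// "-test.js" suffix is an assumed convention.
function findTestModules(paths) {
  return paths
    .filter((p) => p.endsWith('-test.js'))
    .map((p) => p.replace(/-test\.js$/, ''));
}

const modules = findTestModules([
  'modules/chat/chat.js',
  'modules/chat/chat-test.js',
  'modules/photos/photos.js',
]);
console.log(modules); // only the modules that actually have tests
```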

After that, we wrote a client using Selenium/WebDriver so that the machine-readable results could be easily gathered, combined, and shown in a report.
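Combining results gathered from several browsers is essentially a merge into one summary. The result shape below is an assumption for illustration (the real client works over XML):

```javascript
// Sketch of merging per-browser results into one report. The
// { name, passed } shape is hypothetical; the real data is XML.
function combineReports(reportsByBrowser) {
  const combined = { total: 0, failed: 0, failures: [] };
  for (const [browser, results] of Object.entries(reportsByBrowser)) {
    for (const r of results) {
      combined.total += 1;
      if (!r.passed) {
        combined.failed += 1;
        combined.failures.push(`${browser}: ${r.name}`); // keep browser context
      }
    }
  }
  return combined;
}

const report = combineReports({
  firefox: [{ name: 'chat opens', passed: true }],
  chrome: [{ name: 'chat opens', passed: false }],
});
console.log(report.failures);
```

Keeping the browser name attached to each failure matters: a test that only fails in one browser points to a compatibility bug rather than a logic bug.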

At this point in the project:

  • Developers could run the tests in a browser just by going to a predictable URL.
  • CI could use a command-line program to execute them automatically. The results format is natively supported by Jenkins, so we only needed to configure the jobs properly, telling our test framework to put the results in the correct folder and store them in the Jenkins job.
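Jenkins consumes JUnit-style XML out of the box. A minimal sketch of emitting such a report follows; the exact schema Tuenti's framework produces isn't specified in the post, so this is only a plausible shape:

```javascript
// Sketch of a minimal JUnit-style XML report that Jenkins can ingest.
// Escaping is reduced to the characters this sketch actually needs.
function toJUnitXml(suiteName, results) {
  const esc = (s) =>
    s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/"/g, '&quot;');
  const cases = results
    .map((r) =>
      r.passed
        ? `  <testcase name="${esc(r.name)}"/>`
        : `  <testcase name="${esc(r.name)}"><failure>${esc(r.message || '')}</failure></testcase>`
    )
    .join('\n');
  const failures = results.filter((r) => !r.passed).length;
  return `<testsuite name="${esc(suiteName)}" tests="${results.length}" failures="${failures}">\n${cases}\n</testsuite>`;
}

const xml = toJUnitXml('chat-module', [
  { name: 'opens window', passed: true },
  { name: 'sends message', passed: false, message: 'timeout' },
]);
console.log(xml);
```

Dropping files like this into a folder the Jenkins job watches is all the "integration" a build needs.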

That was enough to allow us to execute them hundreds of times a day.


Another important aspect of testing is knowing the coverage of your codebase. Due to how we divide everything into modules that work together, each module may depend on others, even though our tests are meant to be designed per module.

Most of the dependencies are mocked with SinonJS, but some tests don’t mock them. Since a test in one module can exercise another, we could compute coverage either “as integration tests” or with a per-module approach.

The first alternative would instrument all of the code (with YUI Test) before running the tests, so a module might show as exercised only because of a test meant for another module. Such an approach could encourage developers to write acceptance tests instead of unit tests.

Since we wanted to encourage developers to create unit tests for every module, the decision was finally taken to compute the coverage with a per-module approach. This way, we instrument only the code of the module being tested and ignore the others.

To make this happen, we moved the run operation from the PHP framework to a JS runner, which allowed us to perform the operations coverage requires, such as instrumenting and de-instrumenting the code before and after running the tests.
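The per-module idea can be sketched crudely: wrap only the module under test with counters and leave its dependencies untouched. The counting here is function-level, far simpler than YUI Test's real line-level instrumentation, and the module names are invented:

```javascript
// Per-module coverage sketch: only the module under test is instrumented;
// dependencies are left alone. Function-level counting only — YUI Test's
// real instrumentation is line-level, this is just the idea.
function instrument(module) {
  const hits = {};
  const wrapped = {};
  for (const [name, fn] of Object.entries(module)) {
    hits[name] = 0;
    wrapped[name] = (...args) => {
      hits[name] += 1; // count each call into the module under test
      return fn(...args);
    };
  }
  return { wrapped, hits };
}

// Hypothetical module under test.
const photos = { resize: (w) => w / 2, rotate: (d) => d % 360 };
const { wrapped, hits } = instrument(photos);

wrapped.resize(100); // the test only exercises resize
const covered = Object.values(hits).filter((n) => n > 0).length;
console.log(`${covered}/${Object.keys(hits).length} functions covered`);
```

Because `rotate` was never called, the module's own report flags it as uncovered, even if some other module's test happens to call it elsewhere.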

Once the tests are executed, the final coverage report is generated right away, so there was no need to integrate with our current Jenkins infrastructure and set up all the jobs to generate it. Coverage generation takes about 50% more time than regular test execution, but it’s absolutely worth it.

To be Done

There is more work that can be done at this point, such as improving performance by executing tests in parallel or enforcing a minimum coverage for each module. We haven't done the latter yet because coverage is only a measure of untested code; it doesn't ensure the quality of the tests.

It is much more important to have a testing culture than to simply collect code metrics.

The Tuenti Release and Development Process: The Development Environment and Workflow

Posted on 7/22/2013 by Víctor García, DevOps Engineer

Blog post series: part 2
You can read the previous post here.

The Development Environment

We maintain two development environments, a shared one and a private self-hosted one, and we encourage developers to move to the private one to ease and speed up the development workflow, getting rid of the constraints and slowness caused by shared resources.

Both infrastructures provide full “dev” and “test” sub-environments:

  • The dev environment has its own fake user data to play with while coding.
  • The test environment is basically a full environment with an empty database, to be filled with test fixtures, able to run any kind of test:
  1. Unit and integration tests run on the same machine the code does, using PHPUnit.
  2. Browser tests run in a Windows VirtualBox machine we provide, with WebDriver configured to run tests in Firefox, Chrome, and Internet Explorer.

These two sub-environments live under two different Nginx virtual hosts, so a developer can use both at the same time.

The Shared Environment
The shared environment consists of virtual machines in a VMware ESXi environment. Each development team is assigned a single virtual machine. Managing and creating them is pretty straightforward, as they are fully provisioned with Puppet. They provide full dev and testing environments with their own Nginx server, Memcache and Beanstalk instances, a HipHop PHP interpreter, a MySQL database, etc. All of these resources are shared among the users of the virtual machine. The development database, HBase, Sphinx, and Jabber servers are hosted on a separate machine, so these particular resources are shared by all users.

That’s why we’re currently moving everyone to the private environment. We’ve had problems with shared resources, such as when someone executes a heavy script that pushes CPU usage to 100%, causes memory swapping, or slows the machine down due to very high I/O usage, sometimes even affecting users sharing the same physical machine.

The Private Environment: Tuenti-in-a-Box
Tuenti-in-a-box is the official name for this private environment. It’s a VirtualBox machine managed with Vagrant that runs on the developer’s laptop. The combination of Vagrant and Puppet is a fantastic way to set this up, since it lets us provision virtual machines with Puppet very easily: every developer can launch as many private virtual machines as they want with a single command. Thus, Tuenti-in-a-box is an autonomous machine that provides the above-mentioned full dev and test environments, ridding us of problems with shared resources. Every resource lives within the virtual machine, with one exception: the development database, which is still shared among all users.

A project is now under way to let every Tuenti-in-a-box user have their own database with a small, obfuscated subset of production data.

The Code Workflow Until Integration

Developers work in their own branches, and each branch can be shared with other developers. These branches must be kept up to date, so developers frequently update them from the integration branch, which is always considered safe, with no broken tests (we will see how this is achieved in the next post).

Each team organizes its development in the manner they want, but to integrate the code, some steps must be followed:

  • The code must pass a review carried out by teammates:
  1. In order to keep track of what a branch contains, and for the sake of organization, every piece of code must be tied to a Jira ticket.
  2. Doing this lets us use Fisheye and Crucible to create code reviews.
  • The code must be good enough to be deployed to an alpha server.
  • The code can’t break any tests; Jenkins is in charge of executing all of them.
  • A QA team member tests the branch and gives it the thumbs up.
  • After all of that, a pull request is made and the branch starts the integration process (more details about this in the next post).

Jenkins Takes Part!
At this stage, testing is one of the most important things. The development branches need to be tested, and giving developers feedback while they are coding keeps them aware of the state of their branches.

Those branches can be tested in three different ways:

  • Development Jenkins builds, which run only a subset of tests: faster and more frequent.
  1. ~14500 tests in ~10 minutes
  • Nightly Jenkins builds, which run all tests: slower, but nightly, so developers get full test feedback the next day.
  1. ~26500 tests in ~60 minutes
  • Preintegration Jenkins builds, which also run all tests: slower, but the final check before merging to the integration branch.
  1. ~26500 tests in ~60 minutes

The creation of Jenkins builds is automated by scripts that anyone can execute from the command line; no manual intervention in the interface is necessary.

Our Jenkins infrastructure has one master server and 22 slave servers. Each slave parallelizes test execution across 6 environments and, to go even faster, a “pipeline mode” takes 6 slaves and makes them run together to execute all of the tests. You might be interested in what the Jenkins architecture looks like and how it executes and parallelizes such a large number of tests in so little time, but we’ll leave that for another blog post.
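Splitting a test list across parallel environments is the core of that speed-up. A round-robin split is one simple strategy; this is an assumption for illustration, not Tuenti's actual scheduler:

```javascript
// Sketch of distributing tests across N parallel environments using
// round-robin. The real Jenkins scheduling strategy is not described in
// the post; this only illustrates the partitioning idea.
function partition(tests, workers) {
  const buckets = Array.from({ length: workers }, () => []);
  tests.forEach((t, i) => buckets[i % workers].push(t));
  return buckets;
}

const buckets = partition(['t1', 't2', 't3', 't4', 't5', 't6', 't7'], 6);
console.log(buckets.map((b) => b.length)); // the extra test lands in the first bucket
```

With ~26500 tests spread over 6 environments per slave, each environment runs only a few thousand tests, which is how an hour-long suite stays at ~60 minutes instead of growing linearly.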

You can keep reading the next post here.
