At Tuenti, memcached is one of the things we can’t live without. We use it massively in order to serve the huge amount of traffic that leaves our servers. It is a great piece of software, but it has given us some headaches too.
The simplest solution for connecting to memcached servers is by using vanilla TCP connections. This works fine when you don’t have very many servers in your network. As the number of servers grows, however, you start to come up against a serious limitation: the number of concurrent TCP connections a memcached server can handle gracefully.
You can work out the number of connections needed for memcached traffic in your network by multiplying the number of memcached servers by the total number of PHP processes running across all frontend servers. At Tuenti, that currently comes to around 13,500,000 connections in total (an average of 300 processes per frontend server and around 90 memcached servers). That’s A Lot (TM). Each memcached server would have to deal with up to 150,000 open connections.
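The arithmetic above can be checked directly. The post does not state the number of frontend servers, but ~500 is implied by the other figures (13,500,000 / (300 × 90)), so that value is an assumption here:

```python
# Rough connection math for the setup described above.
frontends = 500           # implied by the totals, not stated in the post
procs_per_frontend = 300  # average PHP processes per frontend server
memcached_servers = 90

# Every PHP process holds one connection to every memcached server.
total_connections = frontends * procs_per_frontend * memcached_servers
per_memcached_server = frontends * procs_per_frontend

print(total_connections)     # 13,500,000
print(per_memcached_server)  # 150,000
```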
To make things worse, inside the memcached server every connection has its own set of buffers, so if you have many connections you will need a large amount of memory for buffers. In our setup we needed several GBs per memcached server just to store the data associated with connections.
Another problem is re-connections. Even when you use persistent connections, each time a PHP process is regenerated all the connections belonging to that process are torn down and set up again. The memcached servers can end up spending a lot of valuable processing time just handling re-connections.
The solution seemed pretty obvious: why don’t we use UDP? So we tried it. The memory consumption and re-connection issues were solved, but we experienced a performance drop in the memcached servers. We also saw a significant increase in system CPU time on the server graphs.
What was happening? A look at the memcached source code explained it all.
When you use TCP, every connection is assigned to a thread and all the packets belonging to that connection are processed by that thread. This approach is very efficient.
However, for UDP there is only one socket to handle all the packets. This socket isn’t assigned to a single thread; it is shared by all of them. As a result, when a UDP packet is received, every worker is woken up to handle it. Since only one thread can actually process the packet, the others waste time trying to read from the socket only to find there is nothing waiting for them.
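This wasted-wakeup pattern is easy to reproduce in miniature. The sketch below (illustrative, not memcached code) has four threads wait on one shared non-blocking UDP socket; a single datagram wakes all of them, but only one read succeeds:

```python
import select
import socket
import threading
import time

# One UDP socket shared by all worker threads, as in memcached's UDP path.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
sock.setblocking(False)

results, lock = [], threading.Lock()

def worker():
    # Level-triggered readiness: every waiter sees the socket as readable.
    select.select([sock], [], [], 2.0)
    try:
        sock.recvfrom(1500)
        outcome = "won"
    except BlockingIOError:        # another thread consumed the datagram
        outcome = "wasted wakeup"
    with lock:
        results.append(outcome)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
time.sleep(0.2)                    # let every worker block in select()

# A single request arrives; all four workers wake, one wins.
socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
    b"get foo", sock.getsockname())
for t in threads:
    t.join()

print(results.count("won"), results.count("wasted wakeup"))  # 1 3
```

The three losing threads each burned a wakeup and a failed syscall for nothing, which is where the extra system CPU time on the graphs came from.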
We eventually developed a solution that mimics the way TCP behaves to some extent, where every packet is assigned to just one thread. This can be achieved by having several UDP sockets and assigning each socket to a different thread. However, in order to have different UDP sockets, we needed to have memcached listen on different ports.
We did this by adding an option to memcached that allows it to listen on a range of UDP ports instead of just a single one. When that option is used, the server listens on all the specified ports and assigns each port to a different worker thread.
We added the “-N” option, which takes the number of ports as its argument. For instance, this command:
$ memcached -U 11211 -N 4
launches a memcached server listening on ports 11211, 11212, 11213 and 11214.
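The core idea of the patch can be sketched as follows (a simplified illustration, not memcached’s actual C code — it uses ephemeral ports so it can run anywhere, where memcached would bind 11211, 11212, and so on). Each worker thread owns its own UDP socket, so a datagram wakes exactly one thread:

```python
import socket
import threading

NUM_WORKERS = 4          # what "-N 4" would give you
received, lock = {}, threading.Lock()

def make_worker():
    # One socket per worker, each bound to its own port.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]

    def run():
        data, _ = s.recvfrom(1500)   # only this port's traffic lands here
        with lock:
            received[port] = data

    return port, threading.Thread(target=run)

workers = [make_worker() for _ in range(NUM_WORKERS)]
for _, t in workers:
    t.start()

# Send one request to each port; each wakes exactly one worker.
out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for port, _ in workers:
    out.sendto(b"get key@%d" % port, ("127.0.0.1", port))
for _, t in workers:
    t.join()

for port, _ in workers:
    print(port, received[port])
```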
The only thing left for this to work was to balance the incoming memcached traffic between the new range of ports. Our solution for that was to make each PHP server choose one of the available ports using a hash function based on the server id. This way the load is equally distributed between the ports, and thus between the worker threads in memcached servers.
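The client-side balancing might look like the sketch below. The names (`BASE_PORT`, `NUM_PORTS`, `pick_port`, the `frontend-NNN` ids) and the choice of CRC32 as the hash are illustrative assumptions, not Tuenti’s actual PHP code:

```python
import zlib
from collections import Counter

BASE_PORT = 11211  # matches the -U 11211 example above
NUM_PORTS = 4      # must match memcached's -N value

def pick_port(server_id: str) -> int:
    # Each frontend deterministically hashes its own id to one port,
    # so the load spreads across the port range (and thus the threads).
    return BASE_PORT + zlib.crc32(server_id.encode()) % NUM_PORTS

ports = [pick_port("frontend-%03d" % i) for i in range(1000)]
print(Counter(ports))
```

Because the choice is a pure function of the server id, each frontend keeps hitting the same port (preserving persistent behavior), while across many frontends the traffic is spread roughly evenly.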
This is the CPU graph for a memcached server using UDP with the vanilla 1.4.5 code:
This is the CPU graph for a memcached server using UDP with the multiport patch:
Both graphs cover the same time period and the same amount of traffic. As you can see, the CPU usage with the multiport patch is a fraction of that of the vanilla code.
Running some performance tests, we got the following numbers:
| | Maximum GETs / second |
|---|---|
| UDP (vanilla 1.4.5) | 59,900 |
| UDP (multiport patch, 32 ports) | 133,800 |
If you are interested in the code for multiport support in memcached, you can view or download the patch on GitHub.