Hello, I'm the proud author of Tuenti Challenge 6's 6th problem, Through the Looking-Glass. Please hold your fire :)

This is one of those infamous non-algorithmic problems with no apparent clues, which this year were featured quite early on in the Challenge. Its position may have been related to the fact that it was quite watered down from the original concept, but let's get to it:

The first thing you see upon reaching the problem is its information, or in this case, the lack thereof: there are only two images and some brief text.

If you inspect the HTML, you will see that one of the images comes from Wikipedia, so you can safely assume there's no information hidden there.

The other image is a little trickier: it's a QR code with a strange color palette.

So, what's the first thing to do with a QR code? Read it, of course. Use any mobile reader (or convert it to PPM and transform it yourself, since it's easy to map each color's brightness to black or white) and it will take you to the Wikipedia article about Piet Mondrian.
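If you go the manual route, the brightness-to-black/white step can be done in a few lines. Here is a minimal Python sketch of that idea (the luma weights and the threshold of 128 are my assumptions, not values from the original problem):

```python
def to_black_white(pixels, threshold=128):
    """Map each (r, g, b) pixel to pure black or white by perceived brightness."""
    out = []
    for r, g, b in pixels:
        # Standard luma approximation; dark pixels become QR "black" modules.
        luma = 0.299 * r + 0.587 * g + 0.114 * b
        out.append((0, 0, 0) if luma < threshold else (255, 255, 255))
    return out

# Example: a dark red pixel and a light yellow one
print(to_black_white([(120, 0, 0), (255, 255, 100)]))
# -> [(0, 0, 0), (255, 255, 255)]
```

Run over every pixel of the PPM, this yields a plain black-and-white QR that any reader will accept.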

If you read that article carefully, you will notice a small reference to an esoteric programming language named after him, Piet, whose programs are written as images with a reduced color palette, one that matches the QR's. You are on the right track!

(If you did not spot the reference, a simple Google search for "Piet Mondrian programming", "Piet Mondrian program" or "Piet Mondrian language" would take you to Piet.)

You jump in, download a Piet interpreter (or use an online one), run the QR code and... it does not produce the expected output. It may well be problem number 6, but it was not going to be that easy!

So either it's not a Piet program after all, or there are some quirks in it. You return to the image and analyze it:

- It has a comment that reads "a piet program qr.randomized.flipped.embedded.withnl.png". This was a big oversight on my part: I did not notice that my editor had embedded the original filename as a comment. As it was not intended as a clue, I will continue this write-up without it (I think that's the reason the problem was featured so early).
- The areas of the QR are well defined... all of them except the first two lines and the bottom-right corner, which is black when it should be light.
- The color histogram shows only 20 colors; each one appears in either more than 2,000 or fewer than 20 pixels, and the rare colors appear only in the first two lines.

At this point you should be certain that you have to run a Piet program, but why does it fail?

Reading more about Piet, you learn that black is a special color used to restrict and stop program flow, and that programs start in the top-left corner. In the QR code, the top-left corner is black! In fact, every corner except the top-right is black, so the program must begin there! In hindsight, maybe I should have made the QR print something like "Not the solution" when interpreted right away.

So, how should you transform the QR so that the top-right corner becomes the top-left?

The title of the problem reads "Through the Looking-Glass", which is repeated in capitals in the description, and the fabulous Wikipedia image depicts Alice looking into a mirror... it was hidden in plain sight all along!
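Mirroring the image is a one-step transform: reverse each row of pixels so the top-right corner becomes the top-left. A minimal sketch of that operation on a pixel grid (any image tool's horizontal flip does the same thing):

```python
def mirror_horizontal(rows):
    """Reverse each row so the top-right corner becomes the top-left."""
    return [list(reversed(row)) for row in rows]

# A tiny 2x3 "image" of labels stands in for pixel rows here.
grid = [["A", "B", "C"],
        ["D", "E", "F"]]
print(mirror_horizontal(grid))  # [['C', 'B', 'A'], ['F', 'E', 'D']]
```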

If you run the mirrored image through a Piet interpreter, you will get the correct output:

Why it's simply impassible!

I will finish with the full quote:

"Why it's simply impassible!

Alice: Why, don't you mean impossible?

Door: No, I do mean impassible. (chuckles) Nothing's impossible!"

- Lewis Carroll, Through the Looking-Glass, and What Alice Found There

I hope you enjoyed it as much as I did.

There were two earlier versions of the problem prior to the one that was finally released:

The first version had the Piet program split among the 4 corners of the QR, in both the original and the mirrored version, so, in order to get the full password, you had to run the QR 8 times: rotate it 4 times, mirror it, and rotate it another 4 times, then compose the password from the outputs of the 8 runs. The QR was also going to give you a hint about the corners, but nothing about Piet Mondrian.
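Those 8 runs correspond to the 8 symmetries of a square (the 4 rotations of the image plus the 4 rotations of its mirror). A sketch of how a solver might have generated them, assuming the image is held as a grid of pixel rows:

```python
def rotate90(rows):
    """Rotate a grid 90 degrees clockwise."""
    return [list(col) for col in zip(*rows[::-1])]

def mirror(rows):
    """Flip a grid horizontally."""
    return [row[::-1] for row in rows]

def eight_variants(rows):
    """Yield the 4 rotations of the grid, then the 4 rotations of its mirror."""
    for base in (rows, mirror(rows)):
        g = base
        for _ in range(4):
            yield g
            g = rotate90(g)

# Feeding each variant to a Piet interpreter would produce the 8 password pieces.
variants = list(eight_variants([[1, 2], [3, 4]]))
print(len(variants))  # 8
```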

The second version had some 99% or 0% transparencies, just to mess with the interpreters, but it was still regarded as too difficult.

The third version is what you got in the challenge :)

As is the case each year, we have concluded the second phase of our programming contest with the 10 finalists joining us in our Madrid office to find out first-hand what Tuenti is and who we are.

First off, as promised, we broke the ice with a little party on the terrace on Thursday afternoon: there was no shortage of drinks, food and music. **The finalists got to see first-hand how we celebrate team achievements in the company ;)**

To conclude the final phase of the Tuenti Challenge, the finalists had to solve another mini-contest of 3 problems within two hours. As expected, they were up to the task and their ideas and code did not disappoint.

This was followed by a workshop with Suso B. on *Engineering at Tuenti* and the lunch break. In the afternoon, the finalists had several group and individual interviews in order to finalize the decision of the “committee of wise men” which would decide the final ranking.

Just an hour ago, we held the awards ceremony and announced the names of the winners. The podium is taken by:

Jakub H.

Eric M.

José A. O.

The following participants also reached this second phase (in alphabetical order):

Daniel E.

Endika G.

Jesús L. D.

Jonatan H.

Julian S.

Sergio L.

Taras S.

As always, we would like to thank all this year’s participants for the time, enthusiasm and hours spent trying to solve our problems. **Thank you very much from the entire engineering team at Tuenti.** And for all of you who missed out, we hope to see you next year in #TuentiChallenge7 :-)

8 Days

298,791 Lines of code

3,294 Attempts to solve a problem

2,577 Problems correctly solved

1,817 People registered

776 People solving at least one problem

776 people passed the first problem and 4 brave participants reached the final optimization problem and fought each other for the best score. This year, the challenges were a little easier than usual, because the 20th challenge was an NP optimization problem, and we wanted people to have some time to try to reach a high score.

There were two stoppers (problems with fewer solvers than the next one): problems 11 and 15. The problems with the lowest percentage of solvers among all the people who tried them were problems 14 and 15, as you can see in the graph below. All in all, it seems that problem 15 was one of the toughest.

As usual, we’ve seen a lot of languages being used to solve the problems: 25 this year. We already talked about the most popular ones, but some interesting languages were used to solve only a couple of problems: Visual Basic, Rust and Objective-C each appeared in just one solution, there were two solutions in F# and three in OCaml. The following graph shows the evolution of the most used languages on every challenge (excluding some non-algorithmic ones).

The shortest solution was a cryptic but well-crafted shell script of just one line and 142 characters, which recalls the Twitter fight we had two years ago searching for the shortest solution to a problem. The longest is a solution to the 9th problem with 12,163 lines, including some very long libraries for number factorization.

As usual, the following graph shows the average number of lines per solution for the most used languages.

We introduced a novelty in the last challenge: we created an optimization problem and a scoring system. An optimization problem is an NP-complete problem for which finding an optimal solution is too hard, so the objective is to find a solution as close to optimal as possible. This means that this challenge never ended: participants could keep having fun until the end of the contest and compete with each other for the best score. 4 participants reached this last level; you can see their score evolution in the following graph:

Kudos to all of them, and especially to Jakub, who was the best with a score of 34,972. Good work!

This score was quite close to the best solution that we obtained ourselves with a greedy algorithm, without the time constraints of the contest (yes, we have to admit we had some fun with this problem too ;)). That solution scored 35,104, and using local search we managed to reach 36,035. We will give more details on how we got these scores in a future post. In the meantime… what’s the best score you can get?

The first round of the sixth edition of the Tuenti Challenge ended this Tuesday at 13:37, as scheduled. Of the 1,817 people registered for this first phase, 781 were able to solve at least one problem and 756 solved at least one correctly (both the test and the submit phase).

You can see the full final ranking of the first round here.

This year, the first challenges were a little easier than in previous years, which means more problems solved and more people having fun \o/. What’s more, this year’s last challenge was something special. More info on this in our next posts!

There was very little change in the most used languages by the contestants in this edition, with Python, C++, Java and PHP being the most used languages, as you can see in the graph.

The Tuenti Challenge is a competition where we demonstrate our technical skills, people with technical interests have a good time, and Tuenti gains visibility as a company firmly based on technology, which helps us find especially qualified individuals for our team. The problems are a combination of programming, security and ingenuity challenges.

For those who wish to continue hacking or to complete any unfinished exercises, the webpage for the Challenge will remain available until next year ;) We would like to congratulate the top 30 ranked contenders and thank everyone for participating! Now it’s time for the second round. The excitement is building!

604,800 seconds, 226,246 lines of code (that's more than a new line every 3 seconds), 2,342 participants, 2,395 problems solved (nearly a hundred lines per participant, and per solved problem too!), 1,687 problems solved correctly… **but you want more details**.

839 people passed the first challenge, but only 1 made it to the 19th. Nobody was able to solve the last 2 problems during the contest this year, making it probably the toughest one so far. There were three stoppers (problems with fewer solvers than the next one). Kudos to you if you were one of the participants who didn’t hit the skip button when facing challenges 12, 13 and 15.

The number of correctly solved problems is smaller than the number of solved problems, since most problems attracted several incorrect submissions. The problems with the highest failure rate were challenges 12, 11 and 17 (though this last one only had 2 submissions).

The average time to solve each problem varies from nearly 2.5 hours (problem 18) to nearly 29 hours (problem 17). The average time invested per problem was around 12.5 hours.

The most used ones can be seen here, but in total **28 different programming languages** were used to solve at least one problem. The languages used to solve just one problem were AutoIt, AWK, Lua, OCaml, Pascal, Tcl and even the esoteric language Aheui.

Not all languages were equally popular as the challenge evolved, as you can see in the following graph:

The shortest code (not counting the one-liners for problems that just echoed the solution) is a solution to the first problem in 2 lines of Python. The longest solution totals 3,396 lines of C# code for problem 3, including external code that adds bignum support.

Here are the total numbers of lines for each language.

Out of the 28 programming languages, only 12 were used by the 50 winners of the first phase.

If you didn’t make it this year, you may want to upgrade your tool belt with some new fancy languages from this set!

An interesting metric for measuring the efficiency of a language is the number of lines needed to solve a task. As this may vary widely depending on both the problems and the solvers, we measured it among the top 50 winners. You can see both the total number of lines per language and the average lines of code per solution and language.