Solving Gridflip

July 05, 2007 at 05:45 PM | Play, Facebook

Ted and I had so much fun solving the last Facebook problem that we decided to do another this weekend: gridflip.

The problem is simple: you are given a grid of numbers, some positive and some negative. The goal is to minimize the sum of the grid while keeping the sums of the individual rows and columns positive. The only operation allowed is flipping the sign of an entire row or column. For instance, given the grid:

 [  2  -3 ] -1
 [ -3   5 ]  2
   -1   2    2

A flip to the first row and then to the first column would result in the following grids:

 [ -2   3 ]  1
 [ -3   5 ]  2 # After flipping the first row
   -5   8    6

 [  2   3 ]  5
 [  3   5 ]  8 # After flipping the first column
    5   8   26

This is the best solution for this grid, because it is the only solution where the sums of all the rows and columns are positive.

The solution we came up with for this problem was almost as simple: for every possible combination of flipped/not-flipped columns¹, flip any row whose sum is negative so that every row sum is positive. If, after doing that, the sum of every column is also positive, the grid is a contender for the smallest sum.

Now, there is one small problem: rows whose sum is zero. If a row's sum is zero, flipping it won't change the row's sum, but it will change the column sums. The obvious solution is simply to keep track of which rows sum to zero, then try both flipping and not flipping them... And that's exactly what we did.
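
Here's a rough Python sketch of that approach (this is not our actual code -- the names and structure are just for illustration -- and it treats a zero sum as acceptable, which is what the zero-sum-row handling above implies):

    from itertools import product

    def solve(grid):
        """Brute-force gridflip: enumerate flips of the shorter dimension,
        derive the flips of the other dimension, branch on zero sums."""
        rows, cols = len(grid), len(grid[0])
        if cols > rows:                               # the footnote's transposition trick
            grid = [list(c) for c in zip(*grid)]
            rows, cols = cols, rows

        best = None
        for col_flips in product((1, -1), repeat=cols):
            # Apply this column pattern, then force every negative row positive
            # by flipping it.  Rows that sum to zero are ambiguous, so remember them.
            flipped = [[v * f for v, f in zip(row, col_flips)] for row in grid]
            row_signs, zero_rows = [], []
            for i, row in enumerate(flipped):
                s = sum(row)
                row_signs.append(-1 if s < 0 else 1)
                if s == 0:
                    zero_rows.append(i)
            # Try both choices for each zero-sum row.
            for zero_pattern in product((1, -1), repeat=len(zero_rows)):
                signs = list(row_signs)
                for i, z in zip(zero_rows, zero_pattern):
                    signs[i] = z
                candidate = [[v * s for v in row] for row, s in zip(flipped, signs)]
                if all(sum(c) >= 0 for c in zip(*candidate)):
                    total = sum(map(sum, candidate))
                    if best is None or total < best:  # smallest valid sum so far
                        best = total
        return best

    print(solve([[2, -3], [-3, 5]]))   # the 2x2 example above -> 13 (the sum of [[2, 3], [3, 5]])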

Now, the biggest downside to this algorithm is that it runs in O(2^n), where n is the shortest side of the grid... But in practice, even my less-than-optimal Python code could solve the "large" data set in less than three seconds (Ted's C# code was almost instant).

It makes me wonder, though, is there any way to speed this up?

See /code/facebook/gridflip/ for source code and test grids.

1: Columns for simplicity -- the actual code makes sure there are fewer columns than rows by transposing the grid.

Edit: So, it appears that I can't add. I've fixed up the example grids so that they actually make sense.

Edit by Ted: I'd like to try to clarify our algorithm a little, since the explanation above is a bit crappy. Say we have a 10 x 20 grid. First we choose the smaller dimension, which in this example is 10. Then we enumerate all possible ways of flipping those 10 rows, which is 2^10 = 1024 combinations. For each of those 1024 combinations, we look at the 20 columns and update their sums; if a column's sum is < 0 we flip it, making its sum positive. This lets us directly derive the column-flip pattern that could give a valid solution.
The only issue is when a column's sum is zero: in that case we don't know whether to flip it or not. Maybe 3 of the 20 columns sum to zero (the exact number varies with the row pattern, but it stays quite low), which adds 2^3 = 8 possibilities. At the end, the signs of all the sums are checked and we determine whether this is the best sum so far. That's only a few thousand possibilities in total, and the work per iteration is simple (adding a few numbers and some if-elses), which makes for a very fast runtime.
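
To put rough numbers on that, using Ted's hypothetical figure of 3 zero-sum columns for a 10 x 20 grid:

    row_patterns = 2 ** 10                 # every way of flipping the 10 rows
    zero_branches = 2 ** 3                 # if, say, 3 of the 20 columns sum to zero
    print(row_patterns * zero_branches)    # 8192 candidate grids -- "a few thousand"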


'Optimal Movie Seating' Puzzle

June 24, 2007 at 06:18 PM | Facebook

Well, it's about time I started posting, so here goes. The other day Dave, I (Ted), and a few friends decided to try to solve the Facebook Optimal Movie Seating puzzle, which is based on an XKCD comic.

The gist of the problem is that you are seating 16 people in a movie theater according to a series of rules that assign points to different arrangements, such as seating friends beside each other. You want to find an arrangement that maximizes the 'social enjoyment'. With 16 people there are 20 922 789 888 000 permutations -- far too many to test one by one on a normal consumer-grade computer.
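
For the record, that count is just 16!:

    import math
    print(math.factorial(16))   # 20922789888000 -- roughly twenty-one trillion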

To tackle the problem, Dave started writing a client-server scheduler to split the work among multiple machines for a cluster-computing effect; I'd guess he will write up a post on it soon. Meanwhile, I began looking at ways to quickly evaluate the 'social enjoyment' score and at how to narrow down the solution space so we wouldn't have to test all twenty-one trillion possibilities.

  • To handle the issue of computing scores, we first agreed on an interpretation of the rules in which people have to be adjacent for a rule to take effect. Some of the rules involved pairs of people and others involved triples, so we generated a lookup table at the start that assigns a score to every sequence of three people. Using the lookup table, we could compute partial scores for partial seatings as we permuted the possible seatings (a rough sketch of this appears after the list). This still involves checking every possible seating, albeit much faster than the first version.
  • To cut down the solution space, we looked at the maximum value appearing in the lookup table, which was 4 for us. If only n people are left to be seated, the partial score can improve by at most 4n, so after seating only some of the people we may already know that, no matter how we arrange the unseated people, we will never beat our current best -- and we don't need to try any of those combinations. This simple heuristic made a huge difference in speed (I think the factor was 10x, but I'm making that up off the top of my head).
  • Determined to make it even faster, we looked at our lookup table some more. We realized that the cases worth 4 points were all disjoint, so only one of them could occur in a given seating; once one of the 4-point arrangements was used, the highest remaining value was 3 points. We updated our bound from 4n to 3(n-1)+4, which gave the code a much better early estimate of when a partial seating could not outperform the current best score. This sped the code up by another factor of 10x or so (again, I'm making up numbers since I don't remember and our measurements were hardly scientific).
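
Here is a rough Python sketch of the pruned search described in the list above. The actual rule table isn't reproduced here, so triple_score is a stand-in (a dict from an ordered triple of adjacent people to the points that triple earns), the 4 and 3 in the bound are the values from our particular table, and edge effects such as the first two seats are glossed over:

    from math import inf

    def best_seating(people, triple_score, best_triple=4, next_best=3):
        # triple_score: {(a, b, c): points} for an ordered triple of adjacent people.
        # best_triple / next_best are the 4 and 3 discussed above; a different
        # rule set would give different values.
        best = [-inf, None]                   # best score found so far, and its seating

        def extend(seating, remaining, score):
            if not remaining:
                if score > best[0]:
                    best[0], best[1] = score, tuple(seating)
                return
            # Optimistic bound: each unseated person completes at most one new
            # triple, and at most one of those can be worth best_triple points.
            if score + best_triple + next_best * (len(remaining) - 1) <= best[0]:
                return                        # can never beat the current best -- prune
            for p in list(remaining):
                seating.append(p)
                remaining.remove(p)
                if len(seating) >= 3:         # placing p completes one new triple
                    gained = triple_score.get(tuple(seating[-3:]), 0)
                else:
                    gained = 0
                extend(seating, remaining, score + gained)
                remaining.add(p)
                seating.pop()

        extend([], set(people), 0)
        return best

    # Toy usage with made-up rules: Alice, Bob, Carol in a row is worth 4 points.
    rules = {("Alice", "Bob", "Carol"): 4, ("Bob", "Carol", "Dave"): 2}
    print(best_seating(["Alice", "Bob", "Carol", "Dave"], rules))
    # -> [6, ('Alice', 'Bob', 'Carol', 'Dave')]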

We decided we should let it run through the full set, and turned on the compiler's optimizer for another 2x speedup. In the end our runtime estimates went from on the order of 1000 CPU-hours with the first version to under 5 CPU-hours. This really stresses the benefit of optimizing your algorithm over optimizing the code itself.

I'm sure this problem could be solved much much quicker and we may make another attempt at improving our method.

EDIT: It seems this wasn't clear (which was my fault), but as pointless as it may be, we were aiming for a solution that was guaranteed to cover every option rather than get stuck at a local optimum (which for this problem was pretty much a non-issue anyway). There are certainly far faster and more practical probabilistic approaches, as people have been pointing out.
