How we make things faster.

Today we’ve been playing around with the connection between our timetable parser and Nucleus, trying to work out why parsing and inserting everything was projected to take 19 days.

This was a problem of many, many parts. First up was Alex’s code, which was performing an update to the event on Nucleus for each one of the 1.76 million lines associating students with events. Great fun, since Total ReCal communicates with Nucleus over HTTP and our poor Apache server was melting. This was solved by using an intermediate table into which we could dump the 1.76 million lines (along with some extra data we’d generated, such as event IDs) and then read them back out again in the right order, making the inserts much tidier. This reduced the number of calls to around 46,500, a mere 2.6% of what we started with.
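Roughly, the staging-table trick looks like the sketch below. This is a minimal illustration rather than our actual code: the table and column names, the $parsedLines array, the database credentials and the nucleus_update_event() helper are all hypothetical stand-ins.

```php
<?php
// Minimal sketch of the staging-table approach (names are illustrative).
$db = new PDO('mysql:host=localhost;dbname=totalrecal', 'user', 'pass');

// 1. Dump every parsed student/event line into the staging table.
$insert = $db->prepare(
    'INSERT INTO timetable_staging (event_id, student_id) VALUES (?, ?)'
);
foreach ($parsedLines as $line) {
    $insert->execute([$line['event_id'], $line['student_id']]);
}

// 2. Read the lines back grouped by event, so each event needs only one
//    call to Nucleus instead of one call per attendee line.
//    (MySQL's group_concat_max_len may need raising for big events.)
$rows = $db->query(
    'SELECT event_id, GROUP_CONCAT(student_id) AS students
     FROM timetable_staging
     GROUP BY event_id'
);

foreach ($rows as $row) {
    // One HTTP request per event (~46,500 calls) rather than per line (~1.76m).
    nucleus_update_event($row['event_id'], explode(',', $row['students']));
}
```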

Next, we ran into an interesting problem inserting the events. The whole thing would go really quite fast until we’d inserted around 48 events, at which point it would drop to one insertion a second. Solving this involved sticking a few benchmark timers in our code to work out where the delay was happening. After much probing it turned out that the unique ID generation code I’d created couldn’t cope with the volume of queries: since it was time based, it kept running out of available ID numbers and had to loop round until it found a free one, taking around a second per line. Switching to PHP’s uniqid() function solved that little flaw by making the identifier a bit longer, so the chance of a collision is now vanishingly small.
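For the curious, the failure mode looked roughly like this. The original generator isn’t reproduced in this post, so the first function below is an illustrative guess at the pattern (the events table and the exact ID format are made up); the fix really is just the built-in uniqid().

```php
<?php
// Illustrative only: a second-granularity ID with a tiny random suffix gives
// a very small ID space, so under load the collision-check loop keeps spinning.
function make_id_time_based(PDO $db): string
{
    do {
        $id = time() . rand(0, 99);                    // ~100 possible IDs per second
        $stmt = $db->prepare('SELECT 1 FROM events WHERE id = ?');
        $stmt->execute([$id]);
    } while ($stmt->fetch() !== false);                // retry until no collision
    return $id;
}

// The fix: uniqid() is microsecond-based and longer, so collisions are
// vanishingly unlikely. The 'evt_' prefix and more-entropy flag are our
// choices here, not requirements.
$id = uniqid('evt_', true);   // e.g. "evt_4f1b2c3d4e5f68.12345678"
```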

At the moment we’re running at about 33 inserts a second, meaning the complete inserting and updating of our entire timetable (at least the centrally managed one; the AAD faculty are off in their own little world) is done in a little over 20 minutes. We’ve had to turn off a couple of security checks, but even with these enabled the total time does little more than double, and we’re currently not making any use of caching on those checks, so we can get it back down again (see the sketch below). There are also plenty of other optimisations left to do.
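The caching we have in mind is nothing clever; something along these lines would do. check_api_key() is a hypothetical stand-in for the real validation call: the point is simply that a bulk run checks the same credentials thousands of times, so remembering the answer wins most of that time back.

```php
<?php
// Rough sketch: memoise the expensive security check for the duration of a run.
function check_api_key_cached(string $key): bool
{
    static $cache = [];

    if (!array_key_exists($key, $cache)) {
        $cache[$key] = check_api_key($key);   // the expensive lookup, done once per key
    }

    return $cache[$key];
}
```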

A bit of quick number crunching reveals to me that we’re now running the process in a mere 0.08% of our original 19 days. Not bad.