Benefits: Administrative Staff

Behind the academic side of the University lies an army of support and administrative staff. It’s their job to look after the bits and pieces which let the University actually function, and Total ReCal was built not only to help students and academic staff get on with their lives but also to make the business of supporting them easier.

First of all, Total ReCal helps staff by making any updates to calendar information replicate around the system as rapidly as possible. Since our Nucleus events platform serves as a single source of information, any updates made directly on Nucleus are instantly reflected in views on the data, and even changes to imported data such as timetables and assessments show up a lot faster than before. A reduced lag between making a change and systems updating means that it’s easier to make changes and cancellations to events, and that more people are likely to be informed of those changes. For some alterations we even have change detection which can be hooked into notification systems such as text messaging, making cancelling a lecture and notifying students a simple operation.
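
To make that concrete, here’s a minimal sketch of how such a change-detection hook might look. This is illustrative only – the event structure and the queueTextMessage() helper are made up for the example, not the actual Total ReCal code:

```php
<?php
// Hypothetical sketch of a change-detection hook; the event structure and
// the notification helper are made up for illustration.
function detectChanges(array $imported, array $stored)
{
    $changes = array();
    foreach (array('start', 'end', 'location', 'cancelled') as $field) {
        $before = isset($stored[$field]) ? $stored[$field] : null;
        $after  = isset($imported[$field]) ? $imported[$field] : null;
        if ($before !== $after) {
            $changes[$field] = array('from' => $before, 'to' => $after);
        }
    }
    return $changes;
}

// Stub notification channel; in reality this would hand off to e.g. an SMS gateway.
function queueTextMessage($student, $message)
{
    echo "To {$student}: {$message}\n";
}

$stored   = array('title' => 'ADZ9001 Lecture', 'start' => '2010-11-22 09:00',
                  'cancelled' => false, 'attendees' => array('student1', 'student2'));
$imported = array('title' => 'ADZ9001 Lecture', 'start' => '2010-11-22 09:00',
                  'cancelled' => true);

$changes = detectChanges($imported, $stored);
if (isset($changes['cancelled']) && $changes['cancelled']['to'] === true) {
    foreach ($stored['attendees'] as $student) {
        queueTextMessage($student, 'Cancelled: ' . $stored['title']);
    }
}
```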

There’s also a related benefit to the rapid updating of information: Total ReCal (or, more accurately, Nucleus) represents a single location for calendaring data, which makes it a lot easier to draw on collated data. This may initially seem like a somewhat ‘fluffy’ feature which will never be used, but with a little thought it’s easy to see how it can help drive decisions. For example, we can draw pretty graphs of room usage over time and spot peaks and troughs, enabling smarter timetabling. We can detect collisions across disparate systems, reducing confusion over resource allocation. We can monitor assessment ‘pile-up’ to help spread students’ workloads more evenly. In short, we gain the ability to draw up reports on just about anything in real time.
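
As a rough illustration of the kind of reporting this unlocks, here’s a hypothetical sketch which tallies hour-by-hour usage of a single room. In reality the events would be fetched over the Nucleus API; the endpoint and field names in the comment are assumptions, not the real interface:

```php
<?php
// Hypothetical reporting sketch: count how many events use one room in each
// hour of the day. In reality $events would come from the Nucleus API, e.g.
// json_decode(file_get_contents('https://nucleus.example/events?location=MB1019'), true);
// the endpoint and field names are assumptions.
$events = array(
    array('title' => 'Lecture A', 'start' => '2010-11-22 09:00'),
    array('title' => 'Seminar B', 'start' => '2010-11-22 09:30'),
    array('title' => 'Lecture C', 'start' => '2010-11-22 14:00'),
);

$usage = array_fill(0, 24, 0);
foreach ($events as $event) {
    $hour = (int) date('G', strtotime($event['start']));
    $usage[$hour]++;
}

foreach ($usage as $hour => $count) {
    printf("%02d:00  %s\n", $hour, str_repeat('#', $count)); // crude ASCII graph
}
```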

Finally – and most importantly – Total ReCal will make it easier to do things the ‘right’ way (i.e. by managing events centrally) rather than by groups or departments going off and doing their own thing. Where in the past things such as induction timetables were managed by departments and distributed (literally) as a gigantic Word document, it’s now easier for everybody involved to just use Total ReCal and to distribute the result over My Calendar. Students can be forcibly added to calendars such as induction or exams, and our code takes care of all the hard work of making sure events don’t collide with one another. Even better, there’s no real deadline for content creation because there’s no publishing deadline. Change the event centrally and the change ripples out to all the users and other systems relying on calendar data within minutes.

We hope that within a year Total ReCal will have prompted students to demand that their departments use centralised timetabling and assessment deadline management, leading to a student experience that’s more unified, more reliable, easier to use and just plain better looking.

This isn’t your grandmother’s API permissions control layer…

I’m guessing your grandmother probably didn’t have an API permissions control layer, but if she did, this wouldn’t be it.

This post is mostly about Nucleus, our name for the storage layer which drives the Total ReCal components. The only way to communicate with Nucleus is over our RESTful API. This comes as something of a shock to some people who believe that the way to move data around is a batch script with direct database access, but I digress…

What I’m going to try to do here is summarise just how epically confusing our permissions handling system for Nucleus is, mostly for the benefit of Alex and myself, who (over the next week or so) will be trying to implement this layer without breaking anything important. It’s really, really essential that we get this done before we start promoting the service, for a few simple reasons:

  • Data security is important, and we don’t want anybody being able to read everything without permission.
  • Data security is important, and we don’t want anybody being able to write all over the place without permission.
  • Changing this kind of thing on a live service is like trying to change the engine block on a Formula 1 car whilst it’s racing.
  • We need to be able to guarantee the system can stand up to DoS attacks or runaway processes hammering the APIs.
  • People are already asking for access to this data for important things, like their final year projects.

So, where to go from here? Let’s take a look at everything which will be going on in the finished version.

Server Rate Limiting

Even before the Nucleus code kicks in, the server is fine-tuned to avoid being overloaded by any single IP address or hostname. Using a combination of the OS firewall and the web server configuration, overall request rates and bandwidth usage are kept below thresholds which ensure the server is never overloaded. Due to the RESTful nature of the API (in which each request must represent a complete transaction) we have no requirement for server affinity, so if the load gets too heavy we can easily scale horizontally using pretty much any load balancer.

To keep the pipes clear for our ‘essential’ services we do maintain a whitelist of IPs which have higher (but still not uncapped) limits.
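
The real throttling lives in the firewall and web server configuration rather than in PHP, but the logic is easy to sketch. Here’s an illustrative version using APC as a shared counter – the thresholds, whitelist entries and key scheme are all made up:

```php
<?php
// Illustrative per-IP throttle with a whitelist; not the actual server config.
$ip        = $_SERVER['REMOTE_ADDR'];
$whitelist = array('10.0.0.5');                    // 'essential' services
$cap       = in_array($ip, $whitelist) ? 600 : 60; // requests per minute

$key = 'rate:' . $ip . ':' . date('YmdHi');        // one counter per IP per minute
apc_add($key, 0, 60);                              // create it with a 60s TTL if absent
if (apc_inc($key) > $cap) {
    header('HTTP/1.1 429 Too Many Requests');
    exit;
}
// ...otherwise carry on handling the request as normal.
```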

Key Based Access

The only way to access any data in Nucleus is with an access token, issued by our OAuth system. These come in two flavours: a user token (which grants permission for a specific user) or an autonomous token (which is issued at an application level, and is ‘anonymous’). The very first thing that happens with any request is that the token it presents is validated. No token, no access. Invalid token, no access. Revoked token, no access. To keep things nice and fast we store the token lookup table in memory with a cache of a few minutes, since most requests occur in ‘bursts’.
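
As a simplified sketch of that validation step (the cache TTL, key scheme and lookupTokenInDatabase() stand-in are assumptions, not the production code):

```php
<?php
// Simplified token validation with a short-lived in-memory cache (APC here);
// lookupTokenInDatabase() is a stub standing in for the real lookup.
function lookupTokenInDatabase($token)
{
    // Stub: pretend only one token exists and it hasn't been revoked.
    if ($token === 'abc123') {
        return array('user' => 'ns', 'scopes' => array('events.read'), 'revoked' => false);
    }
    return null;
}

function validateToken($token)
{
    if ($token === null || $token === '') {
        return false;                                // no token, no access
    }
    $record = apc_fetch('token:' . $token);
    if ($record === false) {                         // cache miss
        $record = lookupTokenInDatabase($token);
        if ($record === null) {
            return false;                            // invalid token, no access
        }
        apc_store('token:' . $token, $record, 180);  // cache for a few minutes
    }
    return empty($record['revoked']) ? $record : false; // revoked token, no access
}

var_dump(validateToken('abc123')); // grants access
var_dump(validateToken('bogus'));  // false: invalid token
```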


Update

This has been a big month for Total ReCal. We’ve now perfected our event importers for Blackboard assignments and academic timetables, and we’ve started working on the main web application (screenshots to follow). We’ve also launched a beta registration page for interested staff and students to sign up for early access. Finally, the Talis Keystone service that the University has recently purchased will be in place very soon, meaning we can also start importing book return dates for staff and students.

After numerous code rewrites we’ve got a rock solid API for adding, updating and deleting events in our Nucleus data store. Our import code has also had many updates to support logging of changes to events, which will be invaluable for keeping students up to date. Once the main Total ReCal application has been developed we’re going to sit down and work out how best to make use of these logs.

When a lecturer calls in sick, the central timetabling department isn’t informed (unless it will affect lectures for a long period of time), so our current nightly timetable imports won’t pick up such changes. We’re therefore going to develop a tool for faculty administration staff to make changes to events themselves, as they have a much better picture of the day-to-day situation. This means we can inform students of a change the same day, as soon as someone makes it.

In terms of the front end, I’ve forked our common web design and called it ‘common web design x’: it’s fluid so it adapts to browser size, it’s built on completely semantic HTML5, and it takes the concept of progressive enhancement to new levels. It will also make use of our new OAuth 2.0 based single sign-on service that I’ve written, and it will automatically adapt to mobile layouts.

How We Make Things Faster

Today we’ve been playing around with the connection between our timetable parser and Nucleus, trying to work out why parsing and inserting was projected to take 19 days to finish.

This was a problem of many, many parts. First up was Alex’s code, which was performing an update to the event on Nucleus for each one of the 1.76 million lines associating students with events. Great fun, since Total ReCal communicates with Nucleus over HTTP and our poor Apache server was melting. This was solved by using an intermediate table into which we could dump the 1.76 million lines (along with some extra data we’d generated, such as event IDs) and then read them back out again in the right order, making the inserts far tidier. This reduced the number of calls to about 46,500 – a mere 2.6% of the original workload.
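
The shape of the fix looks something like this. The table, field names and the nucleusApiPut() helper are invented for illustration – the point is simply grouping the staging rows by event so each event costs one API call instead of one per student-event pair:

```php
<?php
// Hypothetical sketch of the batching fix; table, fields and the API helper
// are made up. (GROUP_CONCAT's default length limit would need raising for
// very large groups of students.)
$db = new PDO('mysql:host=localhost;dbname=importer', 'user', 'pass');

$rows = $db->query(
    'SELECT event_id, GROUP_CONCAT(student_id) AS students
       FROM import_staging
      GROUP BY event_id'
);

foreach ($rows as $row) {
    // One PUT per event (~46,500 calls) rather than one per line (1.76 million).
    nucleusApiPut(
        '/events/' . $row['event_id'] . '/attendees',
        array('students' => explode(',', $row['students']))
    );
}
```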

Next, we ran into an interesting problem inserting the events. The whole thing would go really quite fast until we’d inserted around 48 events, at which point it would drop to one insertion a second. Solving this involved sticking a few benchmark timers in our code to work out where the delay was happening, and after much probing we discovered that the unique ID generation code I’d created couldn’t cope with the volume of queries: since it was time based it was running out of available ID numbers and having to keep looping until it found a fresh one, taking around a second per line. Switching to PHP’s uniqid() function solved that little flaw by making the identifier a bit longer, meaning the chance of a collision is now really, really small.
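
For reference, the replacement is essentially a one-liner. uniqid() derives the identifier from the current time in microseconds, and the second argument appends extra entropy, making the result longer and collisions vanishingly unlikely (the prefix here is just an example):

```php
<?php
// uniqid() is time based (microseconds); the second argument appends extra
// entropy, so the identifier is longer and collisions become very unlikely.
$eventId = uniqid('event_', true); // e.g. "event_4b3403665fea6.84258311"
echo $eventId;
```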

At the moment we’re running at about 33 inserts a second, meaning the complete inserting and updating of our entire timetable (at least the centrally managed one; the AAD faculty are off in their own little world) is done in a little over 20 minutes. We’ve had to turn off a couple of security checks, but even with these enabled the time does little more than double, and we’re currently not making use of any caching on those checks (so we can get it back down again). There are also lots of other optimisations left to do.

A bit of quick number crunching reveals to me that we’re now running the process in a mere 0.08% of our original 19 days. Not bad.

How (And Why) We’re Building An API

We’ve explained what Mongo and NoSQL are, and why we’re using them. Now it’s the turn of the actual data access and manipulation layer, something we’ve termed Nucleus.

Nucleus is part of a bigger plan which Alex and I have been looking at around using SOA (Service Oriented Architecture) principles for data storage at Lincoln – in short, building a central repository for just about anything around events, locations, people and other such ‘core’ data. We’re attempting to force any viewing or manipulation of those data sets through central, defined, secured and controlled routes, more commonly known as Application Programming Interfaces, or APIs.

In the past it was common for custom code to sit between services, responsible for moving data around. Often this code would talk directly to the underlying databases and provide little in the way of sanity checking, and following the ancient principle of “Garbage In, Garbage Out” it wasn’t unheard of for a service to fail and the data synchronisation script to duly fill an important database with error messages, stray code snippets and other such invalid nonsense. The applications relying on this data would continue as though nothing was wrong, trying to read it and then crashing in a huge ball of flames. Inevitably this led to administrators having to manually pick through a database to put everything back in its place.
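
This is exactly the failure mode an API boundary prevents. As a rough sketch of the kind of sanity checking Nucleus can enforce before anything touches the datastore (the specific rules here are illustrative, not the real validation code):

```php
<?php
// Illustrative sanity checks at the API boundary; the real Nucleus rules will
// differ, but nothing reaches the datastore unless validation passes.
function validateEvent(array $event)
{
    $errors = array();
    if (empty($event['title'])) {
        $errors[] = 'Event must have a title';
    }
    $start = isset($event['start']) ? strtotime($event['start']) : false;
    $end   = isset($event['end'])   ? strtotime($event['end'])   : false;
    if ($start === false) {
        $errors[] = 'Start date is missing or unparseable';
    }
    if ($start !== false && $end !== false && $end < $start) {
        $errors[] = 'Event must not end before it starts';
    }
    return $errors; // an empty array means the write can proceed
}

print_r(validateEvent(array('title' => '', 'start' => 'garbage'))); // two errors
```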
