This isn’t your grandmother’s API permissions control layer…

I’m guessing your grandmother probably didn’t have an API permissions control layer, but if she did this wouldn’t be it.

This post is mostly about Nucleus, our name for the storage layer which drives the Total ReCal components. The only way to communicate with Nucleus is over our RESTful API. This comes as something of a shock to some people who believe that the way to move data around is a batch script with direct database access, but I digress…

What I’m going to try to do here is summarise just how epically confusing our permissions handling system for Nucleus is, mostly for the benefit of Alex and myself who (over the next week or so) will be trying to implement this layer without breaking anything important. It’s really, really essential that we get this done before we start promoting the service, for a few simple reasons:

  • Data security is important, and we don’t want anybody being able to read everything without permission.
  • Data security is important, and we don’t want anybody being able to write all over the place without permission.
  • Changing this kind of thing on a live service is like trying to change the engine block on a Formula 1 car whilst it’s racing.
  • We need to be able to guarantee the system can stand up to DoS attacks or runaway processes hammering the APIs.
  • People are already asking for access to this data for important things, like their final year projects.

So, where to go from here? Let’s take a look at everything which will be going on in the finished version.

Server Rate Limiting

Even before the Nucleus code kicks in, the server is fine-tuned to avoid overloading from any one IP address or hostname. Using a combination of the OS firewall and the web server configuration, overall request rates and bandwidth usage are kept below thresholds to ensure that the server is never overloaded. Due to the RESTful nature of the API (in which each request must represent a complete transaction) we have no requirement to ensure server affinity, so if the load gets too heavy we can easily scale horizontally using pretty much any load balancer.

To keep the pipes clear for our ‘essential’ services we do maintain a whitelist of IPs which have higher (but still not uncapped) limits.
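
The firewall and web server settings themselves aren’t worth reproducing here, but the idea is easy to sketch in code. Below is a minimal, hypothetical token-bucket limiter with a whitelist of higher limits; the rates, the address and the function names are all made up for illustration, not our production configuration.

    import time

    # Illustrative rates only -- not our real thresholds.
    DEFAULT_RATE = 5.0      # requests/second for ordinary clients
    WHITELIST_RATE = 50.0   # higher, but still capped
    WHITELIST = {"10.0.0.12"}  # hypothetical 'essential service' IP

    buckets = {}  # ip -> (tokens remaining, last refill time)

    def allow_request(ip):
        """Token bucket: refill by elapsed time, spend one token per hit."""
        rate = WHITELIST_RATE if ip in WHITELIST else DEFAULT_RATE
        now = time.time()
        tokens, last = buckets.get(ip, (rate, now))
        tokens = min(rate, tokens + (now - last) * rate)  # burst cap == rate
        if tokens < 1:
            buckets[ip] = (tokens, now)
            return False  # over the limit: drop or return HTTP 429
        buckets[ip] = (tokens - 1, now)
        return True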

Key Based Access

The only way to access any data in Nucleus is with an access token, issued by our OAuth system. These come in two flavours: a user token (which grants permission on behalf of a specific user) or an autonomous token (which is issued at an application level, and is ‘anonymous’). The very first thing that happens with any request is that the token it presents is validated. No token, no access. Invalid token, no access. Revoked token, no access. To keep things nice and fast we store the token lookup table in memory with a cache of a few minutes, since most requests occur in ‘bursts’.
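
As a rough sketch of that flow (the function names and cache period here are ours for illustration, not the actual Nucleus code), validation checks the in-memory cache first and falls back to the OAuth store:

    import time

    CACHE_TTL = 180  # seconds; 'a few minutes' is enough to cover a burst

    _token_cache = {}  # token -> (record, time cached)

    def oauth_store_lookup(token):
        # Stand-in for the real OAuth backing store.
        known = {"abc123": {"user": "nj320", "revoked": False}}
        return known.get(token)

    def validate_token(token):
        """Return the token record, or None: no/invalid/revoked = no access."""
        if not token:
            return None  # no token, no access
        cached = _token_cache.get(token)
        if cached and time.time() - cached[1] < CACHE_TTL:
            record = cached[0]
        else:
            record = oauth_store_lookup(token)
            _token_cache[token] = (record, time.time())
        if record is None or record.get("revoked"):
            return None  # invalid or revoked token, no access
        return record  # a user token carries the user; autonomous ones don't

One note on the cache: a revoked token can survive for up to the TTL, which is the trade-off for not hitting the store on every request.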


How (And Why) We’re Building An API

We’ve explained what Mongo and NoSQL are, and why we’re using them. Now it’s the turn of the actual data access and manipulation methods, something we’ve termed Nucleus.

Nucleus is part of a bigger plan which Alex and I have been looking at around using SOA (Service Oriented Architecture) principles for data storage at Lincoln; in short, building a central repository for just about anything around events, locations, people and other such ‘core’ data. We’re attempting to force any viewing or manipulation of those data sets through central, defined, secured and controlled routes, more commonly known as Application Programming Interfaces, or APIs.

In the past it would be common for there to be custom code sitting between services, responsible for moving data around. Often this code would talk directly to the underlying databases and provide little in the way of sanity checking, and following the ancient principle of “Garbage In, Garbage Out” it wasn’t unheard of for a service to fail and the data synchronisation script to duly fill an important database with error messages, stray code snippets and other such invalid nonsense. The applications which relied on this data would then continue as though nothing was wrong, trying to read it and crashing in a huge ball of flames. Inevitably this led to administrators having to manually pick through a database to put everything back in its place.
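
This is exactly what forcing everything through the API buys us: every write can be sanity-checked before it touches the datastore. A minimal sketch of the kind of check involved (the field names are illustrative, not the actual Nucleus schema):

    from datetime import datetime

    def validate_event(payload):
        """Reject garbage before it ever reaches the database."""
        errors = []
        if not payload.get("title", "").strip():
            errors.append("title is required")
        try:
            datetime.fromisoformat(payload.get("start", ""))
        except (TypeError, ValueError):
            errors.append("start must be an ISO 8601 datetime")
        return errors  # empty list means valid; otherwise respond HTTP 400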


The Total ReCal Plugins

A specific problem that the university faces is the aggregation, integration and publishing of ‘space-time data’; that is, data relating to the use of space (e.g. room bookings, geo-spatial location data) and time (e.g. timetables, event schedules, library book returns).

This project will address this problem by developing plugins for existing university systems that expose useful data, which can then be aggregated into new web-based services. One of these web-based services will be a new calendaring system, initially for students and hopefully for staff later.

All students’ calendars will comprise three core layers: academic timetables, assignment deadlines and book return dates. We will create plugins for the three DMSs (Data Management Systems) the University uses for these: Blackboard, SirsiDynix Horizon (the HiP library portal), and our in-house developed timetable system.

Because we will have developed a standard for storing space-time data from these systems, we are also going to create a number of plugins for other systems so they can add to the datastore. These systems include WordPress and, providing the University has moved to the 2007 version in time, Microsoft SharePoint.

Detailed here are our initial ideas as to how we intend to develop plugins for the systems to access their data.

Blackboard

One of the big motivations behind this project is that, as students, we have no easy way of finding out hand-in deadlines for assignments, being informed if the deadlines change, or seeing the deadlines marked on a calendar alongside our academic timetables (so that we can realise that we’ve got one week, not two, until that deadline!). For example, at the moment the media faculty releases an Excel spreadsheet that mixes deadlines for every module and every year group, which isn’t very useful if I’m trying to work out what has changed when a deadline is updated.

By September all faculties will be using Blackboard for detailing assignments. Many already are, and some have been for several years. When creating an assignment, there is an optional field that the academic can fill in to specify the deadline. Unfortunately, less than 10% of assignments created on Blackboard during the last academic year had anything in this field. Another problem we have is that a number of schools and faculties are making use of the Turn It In service (via a Blackboard plugin) and we have yet to investigate how Turn It In stores the data in Blackboard.

As we understand it (and we will have this verified), the licence the University has with Blackboard allows us to develop on top of the Blackboard API and also to access the underlying database (which is MS-SQL based). As neither Nick nor I are particularly well versed in Java, and the API doesn’t seem to give us access to the information we need, we believe the route we should go down is to access the data straight from the database.

Therefore we will create a script, executed on a cron job, that checks for new assignments in the Blackboard database and verifies the date and time of existing assignments. Additionally, we will try to enforce that academics use the deadline field when creating assignments.
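
A first cut of that script might look something like the sketch below. To be clear, the table and column names are placeholders (we haven’t confirmed Blackboard’s actual schema yet), and the Nucleus endpoint and token are invented for illustration.

    import pyodbc    # assumes an MS-SQL ODBC driver and DSN are set up
    import requests

    def sync_assignments():
        db = pyodbc.connect("DSN=blackboard")
        # Hypothetical schema -- to be replaced once we've inspected the DB.
        rows = db.execute(
            "SELECT assignment_id, title, due_date FROM assignments "
            "WHERE due_date IS NOT NULL"
        ).fetchall()
        for assignment_id, title, due_date in rows:
            requests.post(
                "https://nucleus.example/events",  # placeholder endpoint
                data={
                    "source": "blackboard",
                    "source_id": assignment_id,
                    "title": title,
                    "start": due_date.isoformat(),
                    "access_token": "AUTONOMOUS_TOKEN",  # app-level token
                },
            )

    if __name__ == "__main__":  # run from cron
        sync_assignments()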

Horizon

Through work that we’re doing on our Jerome “un-project” we have a head start on accessing data from Horizon. The University has invested in Talis Keystone, which integrates with Horizon and abstracts the data out over a friendly REST/SOAP web service. Using the APIs we’re developing for Jerome we intend to access book return dates for individuals and publish these as one of the Total ReCal layers.
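
Once Keystone (via Jerome) is in place, pulling a student’s loans and turning them into calendar events should be a short job. A hypothetical sketch, since the real endpoint and JSON fields belong to the Jerome APIs we’re still building:

    import requests

    def book_return_events(username):
        """Map a user's current loans to 'return by' calendar events."""
        # Placeholder URL and fields, not Keystone's real interface.
        loans = requests.get(
            "https://jerome.example/loans/" + username
        ).json()
        return [
            {"title": "Return: " + loan["title"], "date": loan["due_date"]}
            for loan in loans
        ]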

Academic Timetables

Back in November 2009 I was incredibly bored one night and I hacked around with our student timetables to create subscribable iCalendar feeds. The script works by screen-scraping our timetables (here is mine) and then interpreting the JavaScript on the page to produce an array of events which can then be turned into ics format.

Our timetable system was written in-house many years ago so we’ve got a lot of control over the output. For the time being we’re not going to completely replace the HTML version of the timetables, but will add a new script that generates the ics feeds alongside the timetable renders (this happens on a cron job at 3am every morning).
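
The ics generation itself is simple once the scraper has produced the events array. A minimal sketch (a production feed would also want UID and DTSTAMP fields on each event):

    def events_to_ics(events):
        """Render a list of {'title', 'start', 'end'} dicts as an ics feed.

        'start' and 'end' are assumed to be datetime objects.
        """
        lines = ["BEGIN:VCALENDAR", "VERSION:2.0",
                 "PRODID:-//Total ReCal//timetables//EN"]
        for event in events:
            lines += [
                "BEGIN:VEVENT",
                "SUMMARY:" + event["title"],
                "DTSTART:" + event["start"].strftime("%Y%m%dT%H%M%S"),
                "DTEND:" + event["end"].strftime("%Y%m%dT%H%M%S"),
                "END:VEVENT",
            ]
        lines.append("END:VCALENDAR")
        return "\r\n".join(lines)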

WordPress and others

A side project of mine has been developing a system that can add location awareness to our online services. When you visit one of these services your IP address is sent to this system and matched against a list of IP ranges for the University’s wireless and wired networks. The response, if you are on campus, is the building that you’re in, which campus you’re on and whether you’re on a wired or wireless connection. If you’re not on campus then it will list your closest campus and roughly where in the world you are (using the MaxMind database).

We will develop a WordPress plugin that will query this system when someone creates a blog post on our blogs.lincoln.ac.uk platform and then push this information to Total ReCal. A hypothetical mashup we could then build with this data is a heat-map of blog posts tagged “research” overlaid on Google Maps, so we can see where the most research blogging is going on at the University of Lincoln.
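
The plugin itself will be PHP hooked into WordPress’s publish action, but the flow is easy to sketch. Both URLs below are placeholders standing in for the location service and the Total ReCal endpoint:

    import requests

    def on_publish(post_title, author_ip):
        """On publish: geolocate the author, then record the post."""
        location = requests.get(
            "https://locations.example/lookup",  # placeholder service URL
            params={"ip": author_ip},
        ).json()  # e.g. {"campus": ..., "building": ..., "connection": ...}
        requests.post(
            "https://totalrecal.example/events",  # placeholder endpoint
            data={
                "source": "wordpress",
                "title": post_title,
                "campus": location.get("campus"),
                "building": location.get("building"),
            },
        )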

When we know the situation with SharePoint we can also plan for potential plugins for it too.