Wow, did I ever open Pandora’s box when I started reading about REST! What I thought would be a fairly straightforward exercise of choosing the URL structure and working out how to version the API turned out to be mind-bending (in a good sense). I’ll expand on this more as I get my paper into shape, but in the meantime, here are some of the things I’ve been reading about.
Roy Fielding, one of the original authors of the HTTP 1.0 and 1.1 RFCs, introduced REST in chapter 5 of his 2000 dissertation: Architectural Styles and the Design of Network-based Software Architectures. What I never realised until now is how big a vision the designers of HTTP had: they were actually trying to build into the network an architecture to support large-scale distributed systems, the scope of which we as an industry are only just beginning to glimpse.
API versioning fiasco
Every man and his dog has an opinion on how to do version control in REST APIs. In 2012, Tim Wood published a concise, helpful summary of which parties advocated for, or implemented, which method of versioning at the time. His results were both interesting and confusing: basically, this is an area where the commercial world almost (but not quite) settled on one approach – embedding the API version in the URL – in complete disconnect from (or perhaps ignorance of) the research community.
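To make the two camps concrete, here is a small sketch of where the version lives under each approach. The host name, paths, and vendor media type are invented for illustration; real APIs each have their own conventions.

```python
import re

# Approach 1 (the commercial near-consensus): version embedded in the URL path.
URL_VERSIONED = "https://api.example.com/v2/devices/42"

# Approach 2 (favoured in the research-side discussion): version carried in
# the Accept header via a vendor media type, leaving the URL version-free.
URL_PLAIN = "https://api.example.com/devices/42"
HEADERS = {"Accept": "application/vnd.example.v2+json"}

def api_version(url, headers):
    """Return the API version a request pins, wherever it happens to live."""
    m = re.search(r"/v(\d+)/", url)  # look for a /vN/ path segment
    if m:
        return m.group(1)
    # Fall back to a vendor media type of the form application/vnd.x.vN+json
    m = re.search(r"\.v(\d+)\+json", headers.get("Accept", ""))
    return m.group(1) if m else None
```

Both requests pin version 2; they just disagree about which part of the HTTP message should say so.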
What is REST, really?
This disconnect exhibited itself in a number of ways, and was indicative of a broader underlying misunderstanding. As early as 2008, Fielding complained that people really weren’t getting the point of REST. He explained in a number of different ways the idea encapsulated in his final point:
A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. … [Failure here implies that out-of-band information is driving interaction instead of hypertext.] (Emphasis in original.)
To put this another way: the client’s choices for interacting with the server must be wholly dictated by the content of the server’s response to the initial request. No knowledge of the server’s URL structure should be assumed or required on the part of the client.
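A toy sketch of what that constraint looks like in practice: the client below knows only the entry URI and a link-relation convention; the server's other URLs are opaque tokens it merely follows. The in-memory "server", its link relations, and its URLs are all invented for illustration.

```python
# Stand-in for a hypermedia server: each response carries the links that
# define the client's next legal state transitions. The /d7f3 URL is
# deliberately meaningless - the client never parses or constructs it.
RESPONSES = {
    "/": {"links": {"devices": "/d7f3"}},            # entry point (bookmark)
    "/d7f3": {"items": ["router-1"], "links": {}},   # server-chosen, opaque URL
}

def get(url):
    """Stand-in for an HTTP GET against the hypothetical server above."""
    return RESPONSES[url]

def follow(doc, rel):
    """Transition application state by following a server-provided link."""
    return get(doc["links"][rel])

entry = get("/")                     # the only URI known in advance
devices = follow(entry, "devices")   # everything after this is server-driven
```

If the server later moves its device collection to a different URL, this client keeps working unchanged; a client that had hard-coded the URL structure would break.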
A lot of people seemed to struggle to understand Fielding, as evidenced by the quantity and quality of the questions his post generated. (It certainly took me several reads through the material to unwrap what he was getting at.) But in the last few years, things have started to change. A series of blog posts by Mark Nottingham, chair of the IETF HTTP working group, started to show me how the discussion about versioning was missing the point, and that there are bigger issues at stake:
In addition, RESTful Web APIs, by Leonard Richardson, Mike Amundsen, & Sam Ruby, was very helpful for me in unlocking these concepts – they even include a guide to Fielding’s dissertation as an appendix. The big aha moment for me was when they demonstrated with JSON objects how to do (and how not to do) collection navigation. I realised that nearly every web site I’ve ever visited re-implements this in different, incompatible ways that are non-machine-navigable. By following hypermedia paradigms, we will not only save ourselves a lot of work, but allow for the next generation of API clients.
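The collection-navigation point can be sketched like this: a machine-navigable collection embeds its own "next" link, so a generic client can exhaust it without ever computing a page URL. The field names and page URLs here are my own, not the book's exact representation.

```python
# Stand-in for a server exposing a paged collection. Each page tells the
# client where the next page is; the client treats those URLs as opaque.
PAGES = {
    "/widgets": {"items": [1, 2], "next": "/widgets?after=2"},
    "/widgets?after=2": {"items": [3], "next": None},
}

def get(url):
    """Stand-in for an HTTP GET against the hypothetical server above."""
    return PAGES[url]

def all_items(start):
    """Exhaust a paged collection by following server-provided 'next' links."""
    items, url = [], start
    while url is not None:
        page = get(url)
        items.extend(page["items"])
        url = page["next"]  # None on the last page ends the walk
    return items
```

The non-machine-navigable alternative – the client guessing `?page=2`, `?offset=20`, or whatever scheme each site invented – is exactly the re-implementation burden a shared hypermedia convention removes.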
Anyway, stay tuned; this might seem a bit abstract at the moment, but I think it will have important concrete implications for LibreNMS and what we can do with our API.