For quite a few years I have been saying of the server side: "I make REST APIs". I have read the spec, and by and large I comply with the REST objectives/notes. As someone who mostly works for startups, compliance with business requirements and situational awareness is more urgent than architectural details that only have impact when doing more than 10,000 transactions per day.
I like the reasoning behind REST; it is a clearer and easier/faster-to-test build style than SOAP or other XML-RPC approaches. It addresses many more concerns than the older CSV-over-HTTP approach. Most of this article is my opinion, and I am not claiming references.
Why REST is good
- Statelessness makes systems simpler, more reliable and more scalable. Compare this to the X11 windowing system, or to some XML APIs that assumed statefulness was a good thing.
- If each request could be entered by a human with a web browser, it is much easier to test, to isolate failures, and to scale. Compare this to database interactions, and certainly to many XML APIs.
- REST is a short specification, increasing the odds that it is wholly implemented correctly. Also, as it is short and reuses existing technology, it is affordable.
- REST results can be cached, making them easy to scale laterally. This matters when compared with some Enterprise XML software, which was too clever, had security features that suppressed caching, and ran very slowly.
Many of these values are an optimisation in a particular direction. Your current user story may lean the other way, in which case look at the systems I am comparing against. The most popular characteristics tend to swing between extremes.
Metrics by which I make REST APIs
- Keep all the exposed API Nouns in the domain layer, i.e. a customer would place an Order, then make a Payment, and later download a list of Invoices. This example is fairly generic; frequently a more specific one should be used.
- Keep the API Nouns aligned with business-process steps. If the user process just grabs a series of answers from the user and then saves them, there may be little value in subdividing the API, unless you are performing >80,000 requests each day. Although, with this example, a PAF lookup would still be a separate API call. There are articles by fans of GraphQL which claim that each REST API call should map to precisely one DB entity (so, as a concrete example, building an invoice becomes 20 API calls for the line items, then another to create the invoice with the address section).
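A minimal sketch of the domain-level alternative: one POST carries the whole business noun, so the address and every line item arrive in a single call rather than 20. The in-memory store, endpoint shape and field names here are my own illustrations, not anything from a real framework.

```python
# Sketch: a single "place order" call at the domain level.
# ORDERS, post_order and the payload fields are hypothetical.
import itertools

_order_ids = itertools.count(1)
ORDERS = {}

def post_order(payload: dict) -> dict:
    """One POST /orders call carries the whole business noun:
    the address and all line items together, not one call per entity."""
    if not payload.get("line_items"):
        raise ValueError("an order needs at least one line item")
    order_id = next(_order_ids)
    ORDERS[order_id] = {
        "address": payload["address"],
        "line_items": payload["line_items"],
    }
    return {"id": order_id, "status": "created"}

result = post_order({
    "address": "100 Whodunnit Street",
    "line_items": [
        {"sku": "SPIGOT-3", "qty": 100},
        {"sku": "WASHER-9", "qty": 200},
    ],
})
```

The client makes one round trip per business step, which also keeps the audit trail aligned with what the business actually did.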
- In some places, primary keys are needed on the client side, but in many places they do not make a good search term (e.g. searching for the order placed at '100 Whodunnit Street' makes sense; searching for the order placed at address ID '0x94563f2342e234564a345345dd' doesn't; how would the client or the user get that datum?). In the common case, do not expose your IDs. More recent databases are quite flexible about what data you can use for keys.
- Although there are security reasons for returning very vague error or rejection messages, I don't like doing so. HTTP 204, HTTP 403 and HTTP 404 are semantic responses, and useful. If it is critical to make a resource hard to attack, why are you making it public? A web application that returns the homepage with HTTP 200 for every would-be HTTP 404 isn't very useful, and won't slow down attackers by any statistically noticeable amount, as they will spot the identical responses. Concise HTTP responses also use fewer bytes, so cost less to run on most cloud-based services.
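The distinction can be sketched as a handler that maps each outcome to its own semantic status code instead of a blanket 200. The session set, resource store and function names are illustrative assumptions, not part of any real framework.

```python
# Sketch: semantic status codes per outcome. RESOURCES, SESSIONS
# and get_resource are hypothetical stand-ins for real auth/storage.
RESOURCES = {"order-17": {"total": 120}}
SESSIONS = {"alice"}

def get_resource(session: str, key: str):
    """Return (status_code, body) with a distinct code per outcome."""
    if session not in SESSIONS:
        return 403, None          # caller lacks access: say so
    if key not in RESOURCES:
        return 404, None          # genuinely missing: say so
    body = RESOURCES[key]
    if not body:
        return 204, None          # exists, but there is no content
    return 200, body
```

A client (or a human tester with a browser) can then tell "forbidden" from "missing" without reading the body at all.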
- Sometimes "it's better English" to allow API clients more than one access path to a piece of data. This needs to be done carefully, as it is otherwise unneeded development expense. Concrete example: a banking API is likely to support both downloading the log of a particular transaction, and listing all the transactions between a pair of limiting dates.
- As an observed heuristic, you may need to adjust structures if any complex-data API response (e.g. JSON or XML) is expected to exceed 1MB regularly. That much data generally takes a few seconds to parse on phones, and so needs to be chunked for better usability. If that 1MB of JSON represents 1,000 orders, it needs to be chunked so the user can find what they are looking for. For images, video or audio, this size isn't unreasonable.
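Chunking usually means plain pagination. A minimal sketch, assuming a hypothetical orders list and parameter names of my own choosing:

```python
# Sketch: page a large collection instead of returning 1MB at once.
# get_orders_page and its parameters are illustrative, not a real API.
def get_orders_page(orders: list, page: int, per_page: int = 50) -> dict:
    """Return one chunk plus enough metadata to fetch the rest."""
    start = page * per_page
    return {
        "page": page,
        "per_page": per_page,
        "total": len(orders),          # client can compute page count
        "items": orders[start:start + per_page],
    }

orders = [{"id": i} for i in range(1000)]
first = get_orders_page(orders, page=0)
```

Each page stays well under the 1MB heuristic and the phone parses it in a blink; the `total` field lets the UX layer draw a sensible pager.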
- Trim whitespace on inputs that may be fed by copy and paste.
- Make REST URNs case insensitive, like the underlying DNS system is.
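Both bullets are cheap normalisation steps applied before routing or storage. A minimal sketch; the function names and the lower-casing scheme are my own illustration:

```python
# Sketch: normalise inbound values before they hit routing or the DB.
# normalise_input / normalise_path are hypothetical helper names.
def normalise_input(value: str) -> str:
    """Trim copy-and-paste whitespace from a user-supplied field."""
    return value.strip()

def normalise_path(path: str) -> str:
    """Treat /Orders/17 and /orders/17 as the same resource,
    much as DNS treats EXAMPLE.COM and example.com identically."""
    return path.lower()
```

Doing this in one place, early, saves every downstream handler from re-implementing it.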
- For cacheable GET APIs, I try to get all the data points of the request into the URN (not into query params). This maps better onto caching and analysis software. I also do this for PUT APIs, as PHP's support for HTTP bodies with PUT is weak/brittle.
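As an illustration of what "data points in the URN" means in practice, here is a sketch of building and parsing such a path; the `/orders/<from>/<to>` route shape is an assumption of mine, not a prescribed layout:

```python
# Sketch: encode the request's data points in the path, not the query
# string, so every cache/log tool sees one stable URN per request.
def order_report_url(date_from: str, date_to: str) -> str:
    # e.g. /orders/2024-01-01/2024-01-31 rather than
    #      /orders?from=2024-01-01&to=2024-01-31
    return f"/orders/{date_from}/{date_to}"

def parse_order_report_url(path: str):
    """Recover the data points from the path on the server side."""
    noun, date_from, date_to = path.strip("/").split("/")
    if noun != "orders":
        raise ValueError("not an order-report URN")
    return date_from, date_to
```

Because the full request is in the path, a plain HTTP cache keys on it automatically, and log-analysis tools group it without custom parsing.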
- Choosing the response encoding type via an input in the request is a nice idea, but I have had no commercial use for it. I also don't have many APIs that supply the same data in different formats.
- I use HTTP headers for data that I am sending but don't want visible: classically a session identifier, or an HMAC.
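The HMAC case can be sketched with the standard library alone. The header name `X-Signature` and the inline key are illustrative assumptions; in production the key would come from configuration:

```python
# Sketch: carry an HMAC of the request body in a header, keeping the
# signature out of URLs and access logs. Header name is illustrative.
import hashlib
import hmac

SECRET = b"shared-secret"  # illustrative only; load a real key from config

def sign_request(body: bytes) -> dict:
    digest = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"X-Signature": digest}

def verify_request(headers: dict, body: bytes) -> bool:
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side-channels on the comparison
    return hmac.compare_digest(headers.get("X-Signature", ""), expected)

headers = sign_request(b'{"order": 17}')
```

The server recomputes the digest over the body it received, so any tampering in transit changes the comparison result.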
- I only use PUT for idempotent requests (e.g. turn feature X OFF). I use POST for non-cacheable "create things" requests. In practice I think that PATCH is too complex to test, so it has limited usefulness. A PUT request carrying the whole of a user profile is likely to be less than 2K of JSON, so a PATCH request carrying a change delta won't save any noticeable bandwidth or development time.
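Idempotency is easy to see side by side: repeating the PUT leaves the same final state, while repeating the POST creates a new record each time. The feature table and order list here are hypothetical stand-ins:

```python
# Sketch: PUT-style idempotent update vs POST-style creation.
# FEATURES and ORDERS are illustrative in-memory stand-ins.
FEATURES = {"beta_search": True}
ORDERS = []

def put_feature(name: str, enabled: bool) -> dict:
    """Idempotent: calling this twice leaves the same final state."""
    FEATURES[name] = enabled
    return {name: enabled}

def post_order(payload: dict) -> int:
    """Not idempotent: calling this twice creates two orders."""
    ORDERS.append(payload)
    return len(ORDERS)

put_feature("beta_search", False)
put_feature("beta_search", False)   # retrying is harmless
```

That retry-safety is exactly why an unreliable network client can safely re-send a PUT after a timeout, but must not blindly re-send a POST.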
- As a UX heuristic, the execution time of a REST API MUST be kept to less than 3s, and SHOULD be less than 1s. For a pure B2B process without a user interface, this constraint isn't relevant; where possible, comply with it anyway to reduce load on the REST server.
- As a reflection on the last point, APIs can be constructed as three calls supporting creation, polling and finally download of results. This can be used to support "larger data system" requests. Example: reseller A hires me to make a nice shiny API to support their sales process; they wish to use Bank B for payments as they get very good rates, but Bank B's API is terribly slow. The async architecture allows the UX layer to stay responsive.
- Although not a visible feature, all API endpoints that support a business contract should have request logging/auditing (e.g. a POST order to buy 100 spigots must be logged).
- All current web browsers support various HTTP compression schemes, and many HTTP programming libraries support HTTP compression too. Using appropriate headers, I compress data while it's in transit.
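A minimal sketch of the transit-compression bullet using only the standard library; the header handling is simplified (real negotiation parses q-values and multiple encodings):

```python
# Sketch: gzip the response body when Accept-Encoding allows it.
# encode_body is a hypothetical helper; real servers negotiate further.
import gzip

def encode_body(body: bytes, accept_encoding: str):
    """Return (body, extra_headers) for the response."""
    if "gzip" in accept_encoding:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}   # client didn't offer gzip: send it plain

body, headers = encode_body(b'{"items": []}' * 100, "gzip, deflate")
```

Repetitive JSON compresses very well, so this directly shrinks the per-request byte count the earlier cost point was about.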