Servers vs Serverless
The term “serverless” provokes a lot of ire. The truth is that it’s one tool in a toolbox, and one should both understand one’s tools and know when to use them. Application developers, it turns out, don’t need to understand it at all, except in their side capacity as sysadmins (not usually a formal role in larger engineering organizations, as far as I’m aware). The easy thing would be to say that if your application developers need to know how their application is deployed, then you/they have chosen inappropriate abstractions. Tougher is to admit that tooling and education on the subject just aren’t at the point where people can be expected to get the abstractions right, or even be aware of their options.
Here I’ll design a sample web application, and try to do it using abstractions that eschew deployment concerns. Of course, you can’t be truly oblivious to the deployment situation from an architectural perspective, but you can design development workflows and tools that let application developers be oblivious.
The question at hand is: What is the abstraction that application developers should interact with? The goal is to enable them to write code that incorporates both serverless components and components that need to be run on more traditional infrastructure. Wait, why is that the goal?
Rethinking Microservices
Microservices, too, draw much scorn. Again, one must understand one’s tools. Others have written recently about how microservices came to be misused and misunderstood, so I won’t rehash, except to say that independent systems can be scaled and maintained independently. The point of microservices is to keep your independent systems independent, reaping whatever benefits that may bring (and they are often social/organizational as much as they are technical). There are also consequences for how you have to model your problem when working with a distributed system. Orgs that fail to solve these problems in a consistent way are relying on individual developers to do it - and therefore also to understand how to do it - which is a subtle form of scope and domain creep.
I’m not trying to convince anyone to adopt microservices. But many organizations already have, and those organizations are already dealing with the consequences of that architectural choice. These orgs, if they want to improve their lives, should develop paradigms and tools to enforce those paradigms so their developers can separate the concerns in their minds.
Lambda Functions
First let me clarify - by “lambda function” I mean any service run using a function-as-a-service model. This could include self-hosting. There’s no guarantee that you are, or should be, using lambda functions. Reasons you might:
- Highly modular model
- Process only runs when needed - cheaper
I dunno. That’s probably enough - both of those will have a big impact at high enough scale. Modularity makes maintenance easier and even small cost savings add up fast.
Lambda functions have some differences from traditional processes:
- Fundamentally ephemeral - all persistent state needs to be accessed via a remote service of some kind.
- What storage is provided is generally small (though this is probably configurable when self-hosted).
- Networking - lambdas are usually run in isolated networks, meaning services they depend on need to be configured to be accessible.
- If, for example, you are using a database on AWS, you’ll likely also need something like RDS Proxy in front of it to broker access.
- Since connections aren’t long-lived, you need to consider things like connection pooling as well (see the sketch after this list).
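To make that last point concrete, here’s a minimal sketch of the usual workaround, assuming a hypothetical createDbClient helper for whatever database client you use. Anything initialized at module scope survives across warm invocations, so the connection is created once per container rather than once per request:
// Hypothetical sketch: module-scope state is reused across warm
// invocations, so the client (and its connection) is created once
// per container instead of once per request.
const db = createDbClient({ connectionLimit: 1 }); // assumed helper

exports.handler = async function (event) {
  // The handler itself stays stateless; anything persistent lives
  // behind the remote service the client talks to.
  const rows = await db.query('SELECT 1');
  return { statusCode: 200, body: JSON.stringify(rows) };
};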
If a common framework (conceptual or otherwise) is not provided for dealing with all these and whatever other issues you run into, every developer will need to solve these for themselves, leaving the possibility for error in many implementations rather than one, and creating inconsistency across your codebases.
Lean Into It
Once you’re already incurring the architectural overhead of microservices, you can start playing games with exactly what the “unit” of a microservice is. You can deploy something as big or as small as you like. There may be more than one way of modeling things, but one of the most basic ways to think about a web application is as an event-based system. If you use Kafka, SQS, or SNS, this model is natural - those are literally pub/sub messaging services. WebSockets are also event/message based. But what about run-of-the-mill HTTP requests?
HTTP requests are pretty* easy to model as events as well. The event is the request coming in. That request contains information - headers, the payload, etc. - which makes up the event message body. Responding to a request is just another message, sent back to the requester. The mechanics of the message passing are abstracted so that developers need only think of things as HTTP “events” and HTTP event “handlers”.
*If you noticed that streams do not play nicely with this model, you get bonus points. Unfortunately these cannot be redeemed for anything.
There’s nothing special about HTTP requests - an “event” is whatever you say is an event. If you have a codepath that is designed to only run when what you consider to be an “event” has occurred, then you can simply… “emit” an event (whatever that means) at that point in the codepath. And you can set up code to handle those events.
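As a minimal sketch, using nothing but Node’s built-in EventEmitter (the event and function names here are made up for illustration):
const { EventEmitter } = require('events');
const bus = new EventEmitter();

// Set up code to handle the event...
bus.on('user.registered', async function (user) {
  console.log('sending welcome email to', user.email);
});

// ...and "emit" it at the point in the codepath where it occurs.
function registerUser(user) {
  // ...persist the user somewhere, then:
  bus.emit('user.registered', user);
}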
I’m going to assume you get all of this. If not maybe I’ll write more about it another time. For now, the point I’m making is that whether you run a webserver that handles a collection of routes, or multiple lambda functions that each handle a single route, needn’t concern the developer, if you give them appropriate tools.
An Example Application (Framework)
I promised an example. Let’s look at an application that’s a collection of 6 endpoints:
POST /users
GET /users
GET /users/:id
POST /posts
GET /posts
GET /posts/:id
This provides endpoints for registering and fetching users, and for creating and fetching posts. Suppose that this is a monolithic application. We’re not going to evaluate whether it needs to be a monolith, because this application already exists. Instead we want to evaluate the best way to add a new piece of functionality: an address geolocation lookup. The new endpoint will take an address, return a cached lat/lng if it has one, and if not, look it up on a third-party maps service before caching and returning it.
What’s the easiest way for this application team to ship this? Just add a POST /address/latlng endpoint to the app, right? Maybe, assuming:
- You’ll never need this lookup functionality in another application. As described, that’s a pretty generic, reusable feature. Implementing it directly in the application would complicate its reuse.
- The application team has time to build it. You can’t always pick which resources are available. What if another team or external contractor has to contribute this? Are they going to submit directly to another team’s codebase? This doesn’t have to be horrible, but as a developer having to keep lingering memories of a system I almost never touch is an overhead I’d personally prefer to do without. If you use someone external this is an even less attractive prospect.
- The monolith already contains a cache, and there are no concerns about sharing resources. Cloud services and high availability are about graceful degradation. By coupling these endpoints, you entwine the fates of these otherwise separate functions in production.
There doesn’t seem to be a need for this piece to actually be a part of the monolith. There are no dependencies on its data or functionality. This opens up the option to deploy it separately. This is - importantly - an option, and you can decide whether it makes sense to take it. I listed some of the downsides to a monolithic setup above. Downsides of separation include:
- More complicated deployment. You are now deploying multiple applications, presumably behind one URL. You now need an API gateway or reverse proxy if you didn’t already have one. You may need multiple load balancers. It’s messy to manage, and it can get expensive.
- One team, two concerns. Maybe your app team does have time to build this. Now they have two responsibilities instead of one. Is this a lambda that has a specific handler method signature? Does it need HTTP server boilerplate? Where does that come from? This is where proper tooling comes into play. The developer’s workflow shouldn’t change based on how their code is deployed.
Request Handlers
What is an HTTP handler? Express.js handlers look like this:
router.post(
  '/address/latlng',
  parseJSONBody, // body-parsing middleware, e.g. express.json()
  async function (req, res) {
    const payload = req.body;
    // `cache`, `maps`, and `getCacheKey` are assumed to be in scope:
    // a cache client, a maps-service client, and a key derivation helper.
    let latlng = await cache.get(getCacheKey(payload));
    if (!latlng) {
      latlng = await maps.geolocate(payload);
      await cache.set(getCacheKey(payload), latlng);
    }
    res.json(latlng);
  }
);
This could be completely self-contained. It could be added either to the existing monolith or to a brand-new Express application. It could even be transformed to accept the function(event, context, callback) signature that lambdas use. But all three of these would involve some thought on the developer’s part as to how this code precisely plugs in. What if all the developer had to contribute was the function itself? What if the ops/infrastructure pipeline knew how to incorporate such a function into an existing Express application or a standalone function? How would that look?
First, let’s define our deployment options. We’ll use the three above - existing Express, new Express, standalone lambda.
Next we have to pick what the common method signature looks like. Let’s consider two options: the Express signature and the AWS Lambda signature:
function(req, res) {}
function(event, context, callback) {}
These are the most basic options you could consider. Neither is ideal if you want your handlers to be generic regardless of event source - that is, if you want to be able to write an HTTP handler and a queue handler that are identical. That would require some design around what a generic payload representing any kind of event might look like. Why would you want to do that, you may ask? Good question; I’ll check into that. For now, for simplicity, we’ll assume we’re just talking about HTTP.
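For the curious, here’s one hypothetical shape such a generic payload could take - entirely illustrative, not a spec:
// A source-agnostic event envelope: the same handler could receive
// this whether the trigger was an HTTP request or a queue message.
const exampleEvent = {
  source: 'http',                 // or 'sqs', 'kafka', ...
  name: 'address.latlng.lookup',  // logical event name
  payload: { address: '123 Example St' },
  meta: { headers: {}, receivedAt: Date.now() }
};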
At this point if we were actually building this we’d encounter an inescapable truth: reality is messy. No matter what tooling you introduce - linting, wrappers, etc - you may not be able to guarantee someone doesn’t write a handler that is only valid in one environment (although some handlers may implement functionality that requires a specific runtime). You can, however, do a pretty good job. So we’ll assume that this function can be imported as a module and wrapped suitably in each environment regardless of the signature we choose.
There are existing packages that make it easy for a lambda function to invoke an Express server (serverless-http, for example), but I couldn’t find any that let a lambda handler be passed to the Express router. For ease, then, we’ll pick the Express format.
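For instance, with the serverless-http package, wrapping an Express app for lambda is about this much code (a sketch of the package’s documented usage):
const serverless = require('serverless-http');
const express = require('express');

const app = express();
app.get('/health', (req, res) => res.json({ ok: true }));

// The exported handler has the signature lambda expects.
module.exports.handler = serverless(app);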
What are the real deliverables we need from developers? Arguably, the API path (/address/latlng) is not one, as the API structure may be owned by someone specific, or another team. Realistically, as long as path variables required by the handler are represented in the path, the path can be anything at all. I’m open to arguments for adding it, but I’ll omit it here.
The handler method, on the other hand (GET, POST, etc.), is a good thing to specify as a deliverable. I can’t think of a good reason to leave the invocation method up in the air, even if the handler is written in such a way that it would work with multiple methods. In the worst case, whatever invokes the deliverable can choose to have configuration that overrides the recommended method.
Validation could be optional but it’s a good idea. Validation is an easy thing to configure in a generic, programmatic way, so allowing a developer to specify it is convenient. This is relevant to the situation mentioned earlier where a URL path may include a dynamic variable.
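As a sketch of how a harness might turn a developer-supplied schema into middleware - here using Ajv as one possible JSON Schema validator:
const Ajv = require('ajv');
const ajv = new Ajv();

// Build one Express middleware from a { body, query, ... } schema map.
function validationMiddleware(schemas) {
  const validators = {};
  for (const [part, schema] of Object.entries(schemas)) {
    validators[part] = ajv.compile(schema);
  }
  return function (req, res, next) {
    for (const [part, validate] of Object.entries(validators)) {
      if (!validate(req[part])) {
        return res.status(400).json({ errors: validate.errors });
      }
    }
    next();
  };
}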
Authentication and other middleware to pre-process the request is also useful to be able to specify, but for simplicity we’ll leave those off of this. Ditto for specifying request/response content-type/accept values, which could be used by the harness to attach parsing or additional validation middleware.
Since any function could run independently you’ll likely want a way to pass environment variables to it. This is a whole other can of worms that we’ll set aside for now.
Finally, as I mentioned earlier, some functions may only be suitable for deployment in a specific kind of environment. If the local filesystem is used to store state between invocations, or native software incompatible with the lambda runtime is needed, you may need to force one environment or the other. So we’ll have a field for that.
So in summary, what we want from developers, for any given endpoint, is a module that looks like this:
module.exports = {
  // HTTP method this handler should be invoked with
  method: 'POST',
  // JSON Schema validation for parts of the incoming request
  validation: {
    /* query: aJSONSchemaObject, */
    body: anotherJSONSchemaObject
  },
  // Optionally force a deployment environment:
  // runtime: 'server',
  handler: async function (req, res) {}
};
It’s fairly straightforward to build an Express-based harness that adds these as routes to the application automatically. You could seed the harness with a file that looks like this:
{
"/address/latlng": "handlers/address-lookup.js"
}
And use that to generate your whole API dynamically - a sketch of such a harness follows below. If you want to run via lambda, you can wrap the Express application with the aforementioned library. Adding an endpoint to an existing service is easy if it uses this framework. If not, why bother? Just run the new piece alongside it. With everything behind a gateway you can make the API look however you want - no need to constrain your backend architecture for that.
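Here’s a minimal sketch of that harness, assuming handler modules shaped like the one above and a routes.json manifest like the one we just saw (all file names hypothetical):
const express = require('express');
const serverless = require('serverless-http');
const routes = require('./routes.json'); // { "/address/latlng": "handlers/address-lookup.js" }

function buildApp() {
  const app = express();
  app.use(express.json());
  for (const [path, modulePath] of Object.entries(routes)) {
    const endpoint = require('./' + modulePath);
    // A real harness would also attach validation middleware built from
    // endpoint.validation, and honor endpoint.runtime, at this point.
    app[endpoint.method.toLowerCase()](path, endpoint.handler);
  }
  return app;
}

// Run as a traditional server...
// buildApp().listen(3000);
// ...or wrap the same app for lambda:
module.exports.handler = serverless(buildApp());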
Of course, this all depends on such a harness existing, and the ops pipeline having support for this implemented. But you can do it, and there are clear benefits.
If there is functionality that depends on the deployment environment, common packages that wrap the functionality should be supplied for developers to use. You essentially provide a framework or platform for development. Your imagination is the limit.
What
The idea I’m trying to convey here is that if we want flexible architectures - and I assure you, we do - we need small, versatile building blocks. Ideally we want tooling to already exist around these building blocks, but the better resourced an organization is, the fewer excuses it has for not investing in its own infrastructure. Well-designed building blocks can make not only development but also building tooling easier.
What I really want is for web development to be easy to think about. Modern architectures are complex, but retreating to the old-school architectures we’re all used to doesn’t solve the problems that complexity exists to address. I have nothing but messy thoughts on this subject, so I will almost certainly write more in order to clarify them. Hopefully you found this meandering stroll interesting (and if not, try not to dwell on the time you wasted).