From Flask to Serverless

Dan Lindeman


At Spantree we work with a variety of clients, from startups to enterprises. Regardless of their company size, all of our clients want to save as much as possible on IT infrastructure costs. Follow along as we take a traditional backend Flask app, and use it to explore serverless architectures and the Serverless framework.

Why Serverless?

As with the arguments for migrating from on-premise to cloud-hosted solutions, serverless architectures may offer lower costs, better security, and easier administration than traditional infrastructure. The cost savings are only realized, however, if you study your domain, understand the access patterns of your applications, and know the ins and outs of the cloud provider you deploy to. A heuristic we often use is, "If your application can't scale to zero, serverless probably isn't the right choice." Cloud hosting has helped hundreds of companies dramatically cut the cost of their IT infrastructure by eliminating the need to purchase, power, and administer their own hardware. Serverless architectures are the natural next step in the progression from owning a complete set of hardware to truly paying as you go, if you know what you're doing.

Hansel. So Hot Right Now. Hansel.

So Hot Right Now

Whenever I encounter something that is so hot right now, I approach it with cautious optimism. I'm used to deploying my backend code to a server somewhere, so imagining a world where that isn't true feels alien.

Fortunately, serverless doesn't really get rid of servers. Instead, it removes the need to administer them. It also offers huge productivity gains in event-driven architectures. Before we go off the deep end, let’s familiarize ourselves.

What do you mean there are 'no servers'? - Me at first, probably


If I'm learning something new, it helps me to compare it to something I know. If you're a backend developer, you're likely familiar with web frameworks like Django, Rails, and Spring Boot. For smaller applications, I tend to use Flask. Let's take a look at a Flask app and convert it over to serverless. There are many ways to create serverless applications; one framework I enjoy is the aptly named Serverless.

The App

Employees at Spantree love ice cream. So much, in fact, that we needed to build a web app to keep track of our favorite flavors. In the app, we’ll define two endpoints: one that retrieves the entire team's preferences, and one that returns a single person’s favorite flavor.



In Flask

With Flask, our routes are defined using the @app.route decorator.
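A minimal sketch of what those routes might look like, assuming a hypothetical in-memory FLAVORS dictionary as our data store:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory data store, for illustration only
FLAVORS = {
    "dan": "mint chocolate chip",
    "cedric": "pistachio",
}

@app.route("/team")
def team():
    # The whole team's preferences
    return jsonify(FLAVORS)

@app.route("/ice-cream/<person>")
def lookup(person):
    # One person's favorite flavor, or a 404 if we don't know them
    if person not in FLAVORS:
        return jsonify({"error": "unknown person"}), 404
    return jsonify({person: FLAVORS[person]})
```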



I’ll also add a requirement that these endpoints need to be handed a JSON Web Token, or JWT for short. We can ensure that this is done by using the flask_jwt_extended module and decorating our routes with @jwt_required. If a user hands over a valid JWT, then access is granted. After all, we can’t have the whole world knowing our favorite flavor of ice cream! I realize I’m conflating authorization and authentication here, but for the sake of this app, let’s assume that if I know who you are, you’re allowed to know our favorite flavors of ice cream.


A Whole New World

Shining, Shimmering, Serverless

In Serverless

Let’s look at the same application, but done using Serverless. The first major difference we’ll see is the serverless.yml file. Instead of handling our routing with decorators, we’ll define routes using this definition file. The very first section is where we name our service, choose our runtime, and lock in our cloud provider. Here I choose AWS as the provider and Python 3.6 as the runtime.
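A sketch of that top section (the service name here is made up):

```yaml
# serverless.yml
service: spantree-ice-cream   # hypothetical service name

provider:
  name: aws
  runtime: python3.6
```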


In the plugins section of the serverless.yml, we declare the Serverless plugins we want to use. Here we have one to package a Python app's dependencies and the amazing serverless-kms-secrets plugin for managing secrets.
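A sketch of that plugins section; serverless-python-requirements is the packaging plugin I'm assuming here, and serverless-kms-secrets is the one named above:

```yaml
plugins:
  - serverless-python-requirements
  - serverless-kms-secrets
```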

In tandem with the requirements plugin, I also needed to define some custom attributes in order to deploy a Python application; the plugin's documentation explains why.
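Under that assumption, the custom section might look like this; dockerizePip tells the requirements plugin to build dependencies inside a container that matches the Lambda runtime:

```yaml
custom:
  pythonRequirements:
    # Build native dependencies in a Lambda-compatible Docker container
    dockerizePip: true
```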


Finally, we can start writing our functions, or at least defining them. We give our first function the name team and declare that the code to be run when this lambda is invoked lives in a function, also called team. Events are king in the serverless world, and the events section of the function is where we tell our cloud provider what kinds of events (notice that this section is a YAML list) will trigger our lambda. Here we have one: an HTTP GET request to /team.
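A sketch of that first function definition, assuming the handler code lives in a file called handler.py:

```yaml
functions:
  team:
    handler: handler.team   # module.function invoked for this lambda
    events:
      - http:
          path: team
          method: get
```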


The next definition in this section is for our ice-cream/{person} endpoint. Just like before, we define the events it will respond to. This definition also declares that the request will include a path parameter, person, which we will be able to access when our lookup function is invoked.
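A sketch of that second definition, continuing the assumed handler.py layout:

```yaml
  lookup:
    handler: handler.lookup
    events:
      - http:
          path: ice-cream/{person}   # {person} becomes a path parameter
          method: get
          request:
            parameters:
              paths:
                person: true   # mark the path parameter as required
```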


Alright! We’ve got our API; now it's time to secure it. Inside the functions section, we can define a function called authorizer, which acts as an arbiter of policy documents for us. It looks something like this.
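A sketch of that definition (handler name assumed, as before):

```yaml
  authorizer:
    handler: handler.authorize   # returns an IAM policy document
```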


In order to attach this authorizer to any function, we just have to bolt an authorizer section onto whichever functions we want to call the authorizer. Here is what lookup looks like with an authorizer.
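A sketch of lookup with the authorizer bolted on:

```yaml
  lookup:
    handler: handler.lookup
    events:
      - http:
          path: ice-cream/{person}
          method: get
          authorizer: authorizer   # the function defined above, by name
```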


Now that we have our resources defined, routed, and protected, it’s time to write the functions that will be invoked. We can start with the following skeleton.
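A sketch of that skeleton, assuming the handlers live in handler.py; the response shape is what API Gateway's lambda-proxy integration expects:

```python
import json

def team(event, context):
    # event: details about what invoked us (here, the HTTP request)
    # context: runtime information such as remaining execution time
    return {"statusCode": 200, "body": json.dumps({})}

def lookup(event, context):
    # Path parameters declared in serverless.yml arrive under "pathParameters"
    person = event["pathParameters"]["person"]
    return {"statusCode": 200, "body": json.dumps({"person": person})}

def authorize(event, context):
    # Must produce an IAM policy document; discussed below
    ...
```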


You’ll notice the similarity of the signatures of all these functions. If you've written a Lambda before, the function signatures ought to look quite familiar. They all take an event object, which contains information about the event that invoked the function (recall, events are king in serverless architectures). Each also has a context object parameter, which includes things like remaining execution time and runtime environment information. The authorize function is a little different from our other lambdas: instead of being allowed to return whatever you feel like, you have some very specific and limited options. It must return a policy document in its payload to authorize the user to access our endpoints.
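A sketch of what that might look like in Python, where the policy document is simply returned. The token check here is a placeholder, not real JWT validation:

```python
def generate_policy(principal_id, effect, resource):
    # Minimal IAM policy document in the shape API Gateway expects
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": resource,
            }],
        },
    }

def authorize(event, context):
    # event["authorizationToken"] carries the caller's bearer token.
    # Real validation (e.g. verifying the JWT signature) is elided here.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token else "Deny"
    return generate_policy("user", effect, event.get("methodArn", "*"))
```

API Gateway evaluates the returned Statement: an Allow lets the request through to the target lambda, a Deny turns it away with a 403.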

Wrapping it all up

We’ve seen a Flask app turn into a beautiful serverless butterfly in order to contextualize ourselves in the world of serverless architectures. Honestly, though, an HTTP API is hardly the domain where serverless shines brightest. The reason goes back to our heuristic: if your application can't scale to zero, serverless asks a lot for what you get. If access to this application were basically constant throughout the day, we would likely see no significant cost benefit over an always-on EC2 instance. If instead our application is slammed every day at 3 PM and is a ghost town otherwise, we'd be making bank on the idle time we no longer pay for.

Other than cost, let's consider topologies that do something like insert a document into an Elasticsearch cluster whenever a file gets uploaded. If you've ever tried to do Change Data Capture (CDC) in a traditional relational database, you know the pattern, and you know the heartache. Owing to the massive surface area of "Things that can invoke a lambda", in serverless we can assemble our CDC architecture out of best-in-class infrastructure instead of "Whatever is supported by my RDBMS".

Speaking of events and event-driven architectures, join me next time as we get DynamoDB to automagically index an Elasticsearch cluster with a few lines of YAML and a couple of lambdas.

Need Help Getting Started?

We've been there and back again writing serverless applications for our clients and for ourselves. Getting started can be tricky, but if you want to learn the ins and outs, I suggest learning the underlying cloud provider first. We got our start by working towards AWS certification using the incredible A Cloud Guru AWS Certified Developer course.

If you'd like to chat with us about adopting serverless architectures, or think your team could use a boost, we'd love to hear from you! Send us a line to get the conversation started!