2017-01-04

by Brandon Knitter | Technical Consultant at Taos

You know what sucks?  Servers.

You know what sucks more?  Servers that break.

You know what sucks the most?  Servers that break and require maintenance.

Virtual machines have brought us a long way, and more recently containers have taken us even further.  DevOps automation tools such as Chef, Puppet and Salt have made life much easier and much more repeatable.  Management tools such as Kubernetes, Mesos, and Docker Datacenter have made managing at scale possible.

Event streaming techniques such as pub/sub have been made possible with tools such as Kafka, and are offered as managed services such as Google Cloud Pub/Sub and AWS SNS.  Programming languages support this event-driven paradigm with solutions such as RxJava.

But in each case we are still left with the burdensome task of managing the supporting infrastructure.  What if we could throw all of this away and deploy directly and solely the very code that matters most to our business?

To some extent, PaaS has removed the infrastructure knowledge required to run in the traditional manner, where servers (physical or virtual) are manually or automatically managed.  Solutions such as Google App Engine and Heroku allow a developer to deploy their application without any need to worry about the infrastructure, defining simple rules about how the application should be run.  But there is still quite a bit of overhead, and if the deploying engineer doesn’t understand the scale or dependency restrictions, things can go poorly.

What if we could do away with that overhead, what if we could cause our business logic to be writ directly upon the cloud, deploying only the code that matters, finis, crowd goes wild, cake is served?

Get ready to take off your pants, because serverless is just as liberating and similarly expands your range of motion, leaving you with one less thing to worry about.  Serverless has recently been formalized as FaaS, or Function as a Service, and is the latest aaS you can stick in your back pocket.

Cloud providers are hurrying to provide industry-leading FaaS solutions: AWS has Lambda, Google has Cloud Functions (in alpha) and Microsoft has Azure Functions.  At present, all three appear to function similarly.  IBM is even getting into the game with OpenWhisk, which is offered both as a hosted service on the Bluemix cloud and as an Apache open source project you can use to roll your own FaaS solution.

Here’s how it works: logic is written in one of the supported languages (at press time, JavaScript seems to be the most common) and the script is uploaded.  Then the FaaS solution is told when to execute that logic.  For instance, the logic could be triggered upon an event such as an object change in an object store, or when a URL is called.  Each time the function is triggered, a set of parameters is passed along to it.
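To make that concrete, here is a minimal sketch of what such a function might look like, written in TypeScript (which compiles down to the JavaScript these platforms run).  The Node.js-style callback signature mirrors what AWS Lambda uses; the event shape is purely an assumption for illustration, since each provider passes its own event structure.

```typescript
// Hedged sketch of a FaaS handler in the Node.js/Lambda callback style.
// The ObjectChangeEvent shape is an illustrative assumption, standing in for
// whatever object-change notification the provider actually delivers.

interface ObjectChangeEvent {
  bucket: string; // where the object lives
  key: string;    // which object changed
}

// The platform invokes this export every time the trigger fires, passing the
// event payload and a callback used to report success or failure.
export function handler(
  event: ObjectChangeEvent,
  context: unknown,
  callback: (err: Error | null, result?: object) => void
): void {
  console.log(`Object changed: ${event.bucket}/${event.key}`);

  // Business logic only -- no server, port, or process to manage.
  callback(null, { status: "processed", key: event.key });
}
```

Notice what isn’t there: no web server, no container definition, no instance sizing.  You upload the function, wire it to a trigger, and the platform handles the rest.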

Examples of how FaaS could be used include:

Online calculations such as a mortgage calculator or a currency converter

Event persistence such as taking an event and placing it into a data store

CRUD operations from web and mobile applications

Metrics aggregation such as collecting time series data

Transformations such as converting an image from one type to another

Triggers such as modifying a data store after an object is written to an object store

Each of these examples is arguably very small, and that is intentional.  FaaS is designed to execute small bits of logic, similar to how a microservice may operate.  It’s not inconceivable that tens of these functions could be chained together to complete some greater business need.  If you want to get your feet wet, check out webtask.io; it’s a great way to play with FaaS for free.
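To show just how small “small” can be, here is a hedged sketch of the mortgage-calculator example from the list above.  The request shape and handler signature are assumptions; the payment formula is the standard fixed-rate calculation.

```typescript
// Hedged sketch: an online mortgage calculator as a tiny, stateless function.
// The MortgageRequest shape is an illustrative assumption.

interface MortgageRequest {
  principal: number;  // loan amount
  annualRate: number; // e.g. 0.045 for 4.5%
  years: number;      // loan term
}

// Standard fixed-rate monthly payment: P * r / (1 - (1 + r)^-n)
function monthlyPayment(req: MortgageRequest): number {
  const r = req.annualRate / 12;
  const n = req.years * 12;
  if (r === 0) return req.principal / n;
  return (req.principal * r) / (1 - Math.pow(1 + r, -n));
}

export function handler(
  event: MortgageRequest,
  context: unknown,
  callback: (err: Error | null, result?: object) => void
): void {
  callback(null, { monthlyPayment: monthlyPayment(event) });
}
```

A function like this could sit behind a URL trigger and serve a web or mobile front end with no infrastructure of its own.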

The primary reason for keeping things small is to reduce the startup time of the function.  Each time the function is executed, some backend capacity is allocated to support the request.  In cases where the function is called very often, the allocated capacity can be reused, further reducing the startup time.  Startup time includes compiling the logic as well as making it available through a number of network configurations.
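That reuse is commonly exploited by doing expensive setup at module scope, so it is paid only on a cold start.  Here is a hedged sketch of the pattern; the “expensive setup” is simulated, but in practice it might be opening a database connection or loading configuration.

```typescript
// Hedged sketch of the warm-start pattern: module-level code runs once per
// container (the cold start), while the handler runs on every invocation.

function expensiveSetup(): { startedAt: number } {
  // Imagine a database connection or a large configuration load here.
  return { startedAt: Date.now() };
}

const shared = expensiveSetup(); // paid only on a cold start

export function handler(
  event: unknown,
  context: unknown,
  callback: (err: Error | null, result?: object) => void
): void {
  // Warm invocations reuse `shared`, keeping per-request latency low.
  callback(null, { coldStartAt: shared.startedAt, invokedAt: Date.now() });
}
```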

Small functions can start up in as little as 10 milliseconds.  This is pretty quick when compared to other solutions such as VMs, which can take up to 10 minutes to boot, or Docker containers, which can take up to a minute.  Since being fast and keeping things lean is incredibly important for a FaaS solution, it pays to focus on splitting your problem domains during the design phase.

And why is this speed so important?

If your application’s traffic bursts over short time periods, the overhead of allocating VMs or containers reduces your ability to serve customers quickly and adds to the cost overhead.  Dynamic scaling of VMs is great, but when startup times are long, requests queue up and service latency grows.

One of the additional benefits of FaaS is incremental billing.  Instead of billing in increments of hours for an instance that may run for a few seconds (or milliseconds), FaaS solutions bill per request plus runtime duration, measured in fractions of a second.  This significantly smaller increment can very likely decrease the overall bill for services that are invoked infrequently or scale up and down frequently.
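A quick back-of-the-envelope comparison shows the shape of the savings.  The rates below are illustrative placeholders, not any provider’s actual pricing; the point is the math, not the exact numbers.

```typescript
// Hedged cost sketch -- all rates are assumed, illustrative values.

const requestsPerMonth = 1e6;
const avgDurationSec = 0.2;         // 200 ms per invocation
const memoryGb = 0.128;             // a 128 MB function

const pricePerRequest = 0.0000002;  // assumed $ per request
const pricePerGbSecond = 0.0000167; // assumed $ per GB-second

const faasCost =
  requestsPerMonth * pricePerRequest +
  requestsPerMonth * avgDurationSec * memoryGb * pricePerGbSecond;

// Compare to a small VM billed by the hour whether or not it serves traffic.
const assumedVmHourlyRate = 0.05;   // assumed $ per hour
const vmCost = 24 * 30 * assumedVmHourlyRate;

console.log(`FaaS: ~$${faasCost.toFixed(2)}/month  VM: ~$${vmCost.toFixed(2)}/month`);
```

With these assumed numbers the FaaS bill is well under a dollar while the always-on VM costs tens of dollars; the gap narrows, and can reverse, as traffic becomes constant and heavy.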

But startup time and cost aren’t the only factors; there is also a forced benefit to architecture design that comes out of this usage.  Utilizing FaaS forces a design similar to microservices, compartmentalizing boundaries (domains) of functionality from one another…keeping monolithic designs from creeping into your environment.

But it’s not all unicorns and rainbows, there are a number of things to watch out for in this early and highly volatile space.  Some considerations include:

Vendor Lock-in – many of the mechanisms for deployment and the default system libraries are specific to the cloud provider.

Limited Functionality – the functions themselves are limited to the included libraries and really only achieve common functionality by integrating with the cloud provider’s services.

Short-Lived Processes – while there is no clear recommendation on execution length, functions are not intended to be persistent or to execute for long periods of time (AWS Lambda limits executions to 5 minutes).

Stateless – there is no persistent memory between executions (although statelessness is not a bad design practice regardless of the service implementation); see the sketch after this list.

Startup Time – while most startup times are quoted in the low double-digit millisecond range, including a large number of libraries can add significant startup time.
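On the statelessness point, the sketch below shows why module-level variables are not a substitute for a real data store: they may survive a warm invocation, but the platform can recycle the container at any time, and concurrent invocations may each see their own copy.

```typescript
// Hedged sketch: a counter kept at module scope is NOT reliable state.

let requestCount = 0; // lives only as long as this container does

export function handler(
  event: unknown,
  context: unknown,
  callback: (err: Error | null, result?: object) => void
): void {
  requestCount += 1;
  // For real state, write to an external store (database, cache, object store).
  callback(null, { countInThisContainer: requestCount });
}
```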

Traction is picking up as more cloud providers offer a FaaS solution.  There is a vibrant community, with a number of conferences set for this year alone to gather those of like minds.  Watch for more in 2017 as the serverless trend picks up!

One final word of caution:  This is a very immature space and things are moving rather quickly.  Be careful jumping in too soon, as things could change and affect your application in significant ways.  Nobody wants to be left out to dry, hung on the clothesline next to their discarded pants.

Martin Fowler

http://martinfowler.com/articles/serverless.html

Good write-up of the serverless landscape and the current state of things.

Thoughtworks Technology Radar

https://www.thoughtworks.com/radar/techniques/serverless-architecture

Their definition of what serverless architecture really means.

AWS Lambda

https://aws.amazon.com/lambda/

Google Cloud Functions

https://cloud.google.com/functions/

Azure Functions

https://azure.microsoft.com/en-us/services/functions/

IBM Bluemix OpenWhisk

https://www.ibm.com/cloud-computing/bluemix/openwhisk

Provided as a hosted solution, similar to other cloud providers.

Apache OpenWhisk

http://openwhisk.org/

The IBM contribution to open source.

webtask

https://webtask.io/

A good example of a quickly deployed serverless implementation at extremely low cost.

Serverless Conference

http://serverlessconf.io/

Everybody needs a conference for their ideas; this is the one for serverless and FaaS.
