I remember last year binge-watching the Serverlessconf recordings online, thinking "If there is one conference I want to go to, this is the one!". The talks were so utterly good that I felt bad for not having witnessed them live. I kept looking out for upcoming Serverlessconfs, but they seemed to be coming nowhere near me. Then they suddenly announced the event in Paris and I went crazy. Let's do this!


On day #1 there were multiple workshops on various technology stacks: AWS, Microsoft Azure, Google Cloud, OpenFaaS and IBM Cloud Functions.

I chose a workshop organized by A Cloud Guru where we built a YouTube-like video service website with AWS API Gateway, Lambda, Elastic Transcoder, Google Firebase and Auth0.


The present and future of Serverless observability | Yan Cui

The first presentation of the day was about the present and future of serverless observability. Yan Cui was pleasant to listen to, as he had a very fluent presentation style and insightful observations. He also pointed out an interesting whitepaper, the Serverless Application Lens for the AWS Well-Architected Framework.

The document covers common serverless application scenarios. I encourage you to read it to understand AWS best practices and strategies for designing serverless application architectures.

The next presentation was given by Subbu Allamaraju from Expedia Inc. Expedia is an American travel company that owns some of the most popular travel sites, such as Trivago. He approached the concept from an interesting perspective, highlighting the benefits of the serverless paradigm and contrasting them with common counterarguments: today's limitations and constraints, personal perspectives on cloud, portability, lock-in, cost and so on. He highlighted how past habits, culture and legacy can really slow down adoption.

The presentation provided ideas on how to tackle these blockers on the journey towards a platform of the future, where functions and events fit in, and how to prepare one's colleagues for that future.


Serverless tools with Google Cloud and Firebase | Bret McGowen & Monika Nawrot

Bret McGowen & Monika Nawrot showed demos around the Google Cloud offering. Monika explained how Google's serverless offering has grown over the years and gave a Google Cloud Functions demo with a thumbnail creation function, storage and logging. She also highlighted the possibility of using a function emulator for debugging functions.

Bret McGowen went quickly through some of the best Backend as a Service (BaaS) offerings out there and then focused on Google Firebase. He explained that the Realtime Database was the first Firebase feature, but there are many more built-in features like Cloud Storage, Functions, Hosting, etc.

He also made a bold statement which people seemed to find quite amusing: "More serverless than serverless!" =) It sounded like he was selling artificial sweetener by claiming "Sweeter than sugar", and I found it triggering my bullshit detector. Luckily the demos totally convinced everybody as he showed the Firebase realtime update capability. Apparently Firebase also handles situations where a client connection drops: the data syncs instantly when the client gets back online. All the Firebase features can be used directly in client-side code through an SDK. The sugar on top was BigQuery, which could run string queries over pretty much the entire universe in mere seconds.


Visualizing Serverless Architectures: What does a healthy serverless app look like? | Kassandra Perch

Kassandra Perch talked about visualizing serverless architectures and what a healthy serverless app looks like. Her demo included inspecting error rates and counting cold starts, for example. She showed how to include the IOpipe library, trace and profiler in a Lambda function and how to use them. Interestingly, IOpipe wraps the whole Lambda handler. The recommended tool, IOpipe, has the slogan "See inside your Lambda functions". You can find it here:

IBM Cloud Functions & Apache OpenWhisk: Overview and customer scenarios | Frédéric Lavigne

Frédéric Lavigne introduced IBM's serverless offering and clarified the difference between IBM Cloud Functions and Apache OpenWhisk. OpenWhisk is open source and powers the IBM managed service, and the IBM Cloud team contributes to the open source project. Runtimes include Node 6, Node 8, PHP 7.1, Python 3 and Swift. Additionally, OpenWhisk can run any custom code put in a Docker container.

In the demo he defined a smiley face as SVG inside the function code, enabled by the raw HTTP handling setting. Unfortunately, the demo effect struck the API Gateway part as he tried to show how to use web actions and API Gateway. IBM Cloud Functions has some built-in event sources such as Cloudant, Message Hub and GitHub. They are currently developing a composer for cloud function flow control.

The presentation included some interesting customer cases, such as DOV-E's sound-based mobile device communication. They had a Coca-Cola TV advertisement trigger an inaudible high-frequency message to the viewer's phone. Viewers could not hear the signal, but they received a message on the phone asking whether they would like to drink some Coca-Cola right there and then.

Event specifications, state of the serverless landscape, and other news from the CNCF Serverless Working Group | Daniel Krook

Daniel Krook gave a very informative talk about event specifications, the state of the serverless landscape, and other news from the CNCF Serverless Working Group. CNCF stands for Cloud Native Computing Foundation, which drives the adoption of cloud native computing, including the new serverless paradigm.

It has multiple projects under its wing, such as Kubernetes, Prometheus and Fluentd, to name a few; however, there are currently no serverless projects. There are four active CNCF working groups (Continuous Integration, Networking, Storage, Serverless), and a whitepaper is now available for review. It is hosted on GitHub for collaboration purposes, and they are accepting pull requests to keep up with the fast pace of development in serverless. Please find it here:

Accelerating DevOps with Serverless | Michael H. Oshita

Michael H. Oshita talked about accelerating DevOps with serverless. His excellent demo showcased how to provide automated branch-testing infrastructure for devs without burdening ops. Technically, the setup uses GitHub hooks that trigger CircleCI, provisioning is done with Terraform templates, and Lambda functions are orchestrated with Step Functions. The infrastructure is created on a pull request and destroyed when the branch is merged and deleted. See the ECS Deity project here:

End-To-End Serverless | Randall Hunt

The most entertaining talk of the day was a live coding session with technical evangelist Randall Hunt. His opening line was surprisingly in French, and the goofball show made everyone crack up. He also asked the audience for suggestions on what to build right there on the spot. Not something most presenters would want to try, I imagine. The randomness didn't seem to bother viewers, and he did manage to finish the live demo just in time, which made the organizers happy. The energetic personality and laid-back attitude combined with relevant content was great, and I hope to see more of this in the future. If there is one recording you should watch, in my opinion it would be this one.


With Great Scalability Comes Great Responsibility | Dana Engebretson

Dana Engebretson gave a pleasant talk, telling an interesting story about her journey to a serverless architecture for analytical purposes. Humorous croissant-baking tips and to-the-point analogies made it inspiring and entertaining.


Overall the talks were versatile and excellent. There were also some less technical talks, but nothing really outside the serverless focus, which was good: it didn't seem like the organizers had struggled to find relevant and good presentations for the event. I recommend getting involved in your local cloud computing communities. Serverless development is super-efficient and enjoyable, so if you haven't tried it, I suggest you do so without further ado.

The author Jouni Leino has 15 years of experience in IT. He lives in Stuttgart, Germany, working as a Field Application Engineer in the automotive industry. As Serverless Competence Lead, his mission is to guide customers and colleagues towards a more cost-effective and productive future.
Follow on Twitter

Hedge – Building a new cloud deployment tool with service abstraction

At the start of my Siili career as an apprentice, our group was introduced to a mysterious tool: Hedge. Our apprentice project was to build an internal tool using Hedge as a cloud platform abstraction library and a deployment tool. After our apprentice program came to an end, I was offered an opportunity to continue the development of Hedge in an internal project. Because we had already used Hedge, I had some experience as a framework user. The task I was offered was to implement basic Amazon Web Services (AWS) support in Hedge.

What is this Hedge you are talking about?

Hedge offers automated serverless function code generated from common Ring-compatible handler methods, libraries for various levels of abstraction, and a set of common commands to build handlers, create artifacts, and deploy the created artifacts. Hedge is open source software and is available on our GitHub.

“Hedge is a platform agnostic ClojureScript framework for deploying ring compatible handlers to various environments with focus on serverless deployments.”

Hedge developer

Back to my journey with Hedge development

The scope of "platform agnostic" in Hedge was unclear to me when I started the development. First, I implemented simple feature parity for AWS code creation and deployment with the Serverless Framework. Later, the definition of platform agnostic, the roadmap, and the context of Hedge became clearer.

When I started working with Hedge I had little Clojure experience and almost no hands-on experience with AWS. For the AWS deployment, I checked how other tools do deployment and chatted with my peers. These information sources gave good advice on how to handle deployment.

Why Clojure(Script)? ClojureScript in backend?!

Clojure(Script) is popular and in high demand in Finland and at Siili Solutions. Using ClojureScript gives access to both ClojureScript and JavaScript libraries. The functional paradigm has a steep learning curve, but after learning the basics it is easy to see that immutable data fits well with serverless handlers. Multithread safety adds extra security if one of the supported clouds re-uses processes with multiple threads.

One of the Clojure build tools, Boot, is also used extensively with Hedge. Boot tasks are great for creating and chaining commands. It is fast and easy to develop a set of tasks which, for example, build input files, create all artifacts, and finally deploy the artifacts to the cloud. Those tasks can then be combined into one large task that does everything with one command, or the small tasks can be used to store artifacts on disk and deploy them to the cloud later.

Why platform agnostic framework?

The following code snippets reveal the first problem with current serverless platforms:

AWS function handler example

'use strict';
module.exports.hello = (event, context, callback) => {
  console.log('log msg!');
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless!',
      input: event,
    }),
  };
  callback(null, response);
};

Azure function handler example

'use strict';
module.exports.hello = (context, req) => {
  context.log('log msg!');
  context.res = {
    // status: 200, /* Defaults to 200 */
    body: {
      message: 'Go Serverless!',
      input: req,
    },
  };
  context.done();
};

The above code snippets are simple Hello World serverless function handlers, but the problem is already visible:

  • Handler function signatures
  • Logging APIs
  • Output handling
  • Exit condition signaling

are all different.

The main goal of Hedge is platform agnosticism: once the code and configuration have been written for one cloud platform, they are re-deployable to another cloud provider by changing the deployment command and a few configuration directives. This improves code re-usability and limits the risk of being locked in with a specific cloud provider.

To make the implementation easier, Hedge supports only a small common subset of the features the clouds provide. Limiting the number of supported features might be a risk and might lower the acceptance rate among developers: it is unknown whether developers will adopt a framework that exposes only a small fraction of cloud features. The small feature set definitely narrows down developers' creativity and might also rule Hedge out for some projects.

Our implementation

Hedge adds abstraction layers for function handler code, infrastructure, and deployment. The following code snippets are examples of the abstraction layers in the current development version.

Unified Hedge handler example

(defn hello [req]
  (info "log msg!")
  {:status 200
   :body "Go Clojure!"})

The above example shows how Hedge unifies function handler features:

  • Handler function signature
  • Logging
  • Payload creation and return
  • Function exit condition signalling (the return value)
  • Persistent storage and queues (still on the WIP list)

Infrastructure abstraction

{:api {
       "hello-json" {:handler handler.core/hello-json}
       "calc" {:handler handler.core/calc}
       "fail-hard" {:handler handler.core/fail-hard}}
 :timer {"timer" {:handler handler.core/hello
                  :cron "*/15 * * * *"}}}

Infrastructure abstraction is done with an EDN configuration file. Hedge creates the platform-specific templates from the EDN.
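As a toy illustration of that idea (this is not Hedge's actual generator; the resource shapes below are only my own sketch), turning such a config map into a list of platform-neutral resource descriptions could look like this in JavaScript:

```javascript
// Hypothetical sketch: turn a Hedge-style config (here a plain JS object
// mirroring the EDN above) into a flat list of platform-neutral resources.
// Hedge's real generator emits provider-specific templates; this only
// demonstrates the mapping step.
function configToResources(config) {
  const resources = [];
  for (const [name, def] of Object.entries(config.api || {})) {
    resources.push({ type: 'http-function', name, handler: def.handler });
  }
  for (const [name, def] of Object.entries(config.timer || {})) {
    resources.push({
      type: 'timer-function',
      name,
      handler: def.handler,
      schedule: def.cron,
    });
  }
  return resources;
}

const config = {
  api: { 'hello-json': { handler: 'handler.core/hello-json' } },
  timer: { timer: { handler: 'handler.core/hello', cron: '*/15 * * * *' } },
};
console.log(configToResources(config));
```

A real generator would then render each resource into the target cloud's template format (for example a CloudFormation or ARM resource), which is exactly the part the EDN file lets you avoid writing by hand.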

Deployment abstraction

$ boot help
 aws/deploy                  ** Build and deploy function app(s) **
 aws/deploy-from-directory   ** Deploy files from directory. **
 aws/deploy-to-directory     ** removed for readability **

 azure/deploy                  ** Build and deploy function app(s) **
 azure/deploy-from-directory   ** Deploy files from directory **
 azure/deploy-to-directory     ** removed for readability **
$ boot aws/deploy -n my-stack
Deploying to AWS
Stack is ready!
API endpoint base URL :

The above snippet is an example of how to run Hedge. The help command lists all available commands, and the simple aws/deploy or azure/deploy commands can be used to deploy a project to the selected cloud.

Deployment is abstracted with Boot tasks. An identical set of commands can be used for building, artifact creation, and deployment.

Hedge HTTP function handlers resemble Ring handlers, a well-known abstraction for HTTP in Clojure. Hedge reads the user-supplied configuration files and serverless function handlers, then creates cloud-specific wrapper code between a user-supplied handler function and the cloud's native handler entry points. The wrapper code handles the differences between cloud providers, and in the future there will be libraries for the rest of the code abstraction.
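As a rough sketch of the wrapping idea (this is not Hedge's generated code, and the request/response field mapping is simplified), adapting one portable Ring-style handler to each cloud's native signature could look like this:

```javascript
'use strict';
// Hypothetical sketch of the wrapping idea: adapt one Ring-style handler
// (request map in, {status, body} map out) to each cloud's native entry
// point. Hedge generates this glue in ClojureScript; names are illustrative.

// The user's portable handler.
const hello = (req) => ({ status: 200, body: 'Go Clojure!' });

// AWS Lambda wrapper: translate event -> request, response map -> callback.
const toAwsHandler = (handler) => (event, context, callback) => {
  const res = handler({ path: event.path, body: event.body });
  callback(null, { statusCode: res.status, body: res.body });
};

// Azure Functions wrapper: translate req, set context.res, signal completion.
const toAzureHandler = (handler) => (context, req) => {
  const res = handler({ path: req.originalUrl, body: req.body });
  context.res = { status: res.status, body: res.body };
  context.done();
};

// Quick check of the AWS wrapper with a fake event.
toAwsHandler(hello)({ path: '/hello', body: null }, {}, (err, res) =>
  console.log(res.statusCode, res.body)); // logs: 200 Go Clojure!
```

The user only ever writes `hello`; everything below it is the kind of per-cloud glue the framework can generate from the configuration.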

Hedge is currently under heavy development and some of the listed features are still on the roadmap.

Lessons learned

During this project I had to learn how to create CloudFormation templates and stacks from templates. CloudFormation templates for Lambda functions with API Gateway endpoints seemed overwhelming at first, but AWS SAM simplified template creation. AWS SAM is still a young technology, and I did not find any big projects using it.

I had to find one important piece of information on Stack Overflow: CloudFormation templates using SAM must be deployed using change sets. Luckily it was documented there, since otherwise I probably would have spent a lot of time debugging a feature that was not supposed to work.

Although the documentation of SAM is still vague and mostly available only on GitHub, I can personally recommend using SAM for serverless CloudFormation templates.

See also:
“Modern Application Development: Should you skip microservices and go directly to serverless?”

A Guidance Framework for Architecting Portable Cloud and Multicloud Applications

Multi-cloud, what are the options? – Low level abstraction libraries

Pros and Cons of a Multi Cloud approach

AWS Helsinki Meetup January 2018 slides

(<3 siili_ clojure) – hope you do too!

Cigars and Serverless IoT

Say what? Cigars and serverless? What on earth could those two have in common? Maybe not much, but bear with me and I'll let you know.

The background

Some time ago I happened to run into a new Finnish open-source sensor beacon platform, and I really wanted to give it a try. What could I do with it? Enter cigars! I came up with a requirement and created a user story that I wanted to implement: "As a cigar owner I want to be able to monitor the temperature and humidity of my humidor regardless of my location, so that I know when to add purified water into the humidifier." My current solution required me to open the humidor and check the meter inside it.

[Screenshot: temperature and humidity graph from the humidor]

The brainstorm

Two important non-functional requirements were wireless communication from the humidor, so I wouldn't have to make any physical modifications to the humidor, and a long battery life, so I wouldn't have to replace the battery too often. Both were met by the chosen sensor.

So what else would I need to make it happen? A place where I could process and save the sensor data and visualize it for devices regardless of their (or my) location. Enter cloud! I also had a couple of Raspberry Pi boards lying around so when the sensors arrived I was good to go.

AWS has an IoT service that can listen to messages from things (as in the Internet of Things) you have registered with it. You can then do whatever you wish with those messages: save them into DynamoDB, process them with Lambda, forward them to Kinesis, etc. Just what I needed. Enter serverless! I have to say I was very excited. I had never done any IoT stuff before, so this was going to be a learning experience for me as well.

The solution

The first thing I wanted to do was read the sensor from the RasPi. A little bit of web surfing revealed a small but enthusiastic community around the sensor, and I found a Python script doing exactly what I wanted. The communication technology would be BLE.

The next thing was to connect the RasPi to the AWS IoT service. That was also a no-brainer thanks to the AWS documentation. AWS creates certificates and keys for the thing to authenticate with, and an endpoint for the messages. AWS processes the incoming messages with rules. A rule defines a query that parses the incoming message and the action(s) to be performed. The thing publishes its messages to a named MQTT topic via the given endpoint, and the rule is a subscriber to the same topic.
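To make the rule part concrete, a minimal rule query could look like the following (the topic and field names here are my own examples, not the actual project's):

```sql
SELECT temperature, humidity FROM 'humidor/readings'
```

The action, such as a DynamoDB insert, is then configured on the rule alongside this query, so every message published to the topic gets parsed and stored automatically.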

I chose to save the data into DynamoDB with my IoT rule and implement a serverless website using S3 and Lambda. S3 is an object storage service that is perfect for hosting static HTML files, and Lambda is a compute service for running code without having to worry about any infrastructure. My Lambda function fetches the data from the DynamoDB table and is called via Ajax from the HTML through API Gateway.
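As a hedged sketch of what that read path could look like (table, key, and attribute names are illustrative, not the actual project's; the DynamoDB client is injected so the logic can be shown without AWS credentials):

```javascript
'use strict';
// Hypothetical sketch of the read path: a Lambda handler that queries recent
// readings from a DynamoDB table and returns them for the Ajax call.
// The DocumentClient is passed in, so the handler logic is testable offline.
const makeHandler = (docClient, tableName) => (event, context, callback) => {
  docClient.query(
    {
      TableName: tableName,
      KeyConditionExpression: 'sensor = :s',
      ExpressionAttributeValues: { ':s': 'humidor-1' },
      ScanIndexForward: false, // newest readings first
      Limit: 100,
    },
    (err, data) => {
      if (err) return callback(err);
      callback(null, {
        statusCode: 200,
        headers: { 'Access-Control-Allow-Origin': '*' }, // for the Ajax caller
        body: JSON.stringify(data.Items),
      });
    }
  );
};

// In the real function you would wire it up roughly like:
// exports.handler = makeHandler(new AWS.DynamoDB.DocumentClient(), 'humidor');
```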

Finally, I wanted the lightest and simplest JavaScript graph library to visualize the sensor data from my humidor. See the screenshot above. A bit of a boring graph, I know. But luckily it is not an EKG!

At the time of writing, the data is flowing from the sensor into DynamoDB once every 15 minutes, and it is read from there by Lambda whenever the HTML page is loaded. Maybe I'll implement some alarms next?


What did I learn?

First of all, I learned once again that serverless services are extremely fast and easy to implement, for example for prototyping. They enable individuals and businesses to do things that have been impossible, or at least expensive, in the past. They also make it easy to explore new ways of doing things and doing business. My rough cost estimate for this solution is 1-2€ per month after the AWS free tier has been eaten. So it is fast, easy, and cheap as well.

Secondly, I learned a lot about how DynamoDB works. There were quite a few tricks along the way. For example, it allows you to set the TTL attribute to a field containing epoch seconds as a string, but then it won't do anything.
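In code, the gotcha looks like this: the TTL attribute must be a top-level Number holding epoch seconds, and a string value is just ignored. A hypothetical helper (names are my own):

```javascript
// DynamoDB TTL only fires on a top-level Number attribute holding epoch
// seconds; a string like "1508227200" is accepted but never expires anything.
const ttlEpochSeconds = (daysFromNow) =>
  Math.floor(Date.now() / 1000) + daysFromNow * 24 * 60 * 60;

const item = {
  sensor: 'humidor-1',
  temperature: 21.5,
  expires: ttlEpochSeconds(30), // Number, not String -> TTL works
};
console.log(typeof item.expires); // prints "number"
```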

AWS Services: IoT, Lambda, S3, DynamoDB
Sensor: Ruuvitag
Hardware: Raspberry Pi
Source and more details: Github

Serverless 101 and Siili CraftCon

The second official Siili CraftCon was held before the summer holidays in 2017. It is an internal craftsmanship conference for all craftsmen and -women at Siili. This time it was half a day and three tracks' worth of pure skillz, with topics such as "how to be a tech lead", "data driven design", "RPA" and more.

I had the pleasure of speaking about serverless architecture to the whole crowd as the closing presentation. Since I am a keen agile/lean fan, I am also totally in love with serverless architecture and everything it has to offer in terms of reacting to changes and feedback and the ease of implementing new features and trying out new things.

Serverless means that you only need to think about your business logic; everything else is taken care of by your chosen vendor. All big-name vendors have their own serverless platforms and services. In this context I am talking about the PaaS (Platform as a Service) and FaaS (Function as a Service) side of serverless.

Some may include SaaS and (m)BaaS solutions in the serverless context. SaaS stands for Software as a Service and, like the name implies, it includes software you configure for your needs; some examples are Google Apps, Dropbox and Slack. (m)BaaS is (mobile) Backend as a Service, and it provides backend services, such as authentication, mainly for mobile applications.

FaaS is a subset of PaaS and means you write your function in your chosen language (or in a language supported by your chosen vendor) and deploy it. You also have to configure how the function is called: it can listen to events or be triggered by an HTTP request via an API gateway, among other ways. Your vendor takes care of scaling it to your needs, and you pay only for execution time. Wikipedia explains FaaS as "a category of cloud computing services that provides a platform allowing customers to develop, run, and manage application functionalities without the complexity of building and maintaining the infrastructure".

PaaS includes a lot more than just FaaS. Widely available PaaS services include messaging, databases, big data, analytics, file storage, etc. They are all services you launch and configure. You can insert your FaaS function into a PaaS workflow and use all the other available services with it. Again, your vendor takes care of your infrastructure needs, like scaling and backups, and you pay for what you use. Wikipedia explains PaaS as "a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure". See how it differs from FaaS?

In short, serverless means you are responsible only for your code and your data.

At Siili we have a lot of internal serverless development on top of all the fancy stuff we create for customers. We also have cloud sandboxes freely available for all Siilis for learning and trying out serverless stuff.