Don’t go to conferences!

The title comes from a pitching session at the Continuous Integration and Testing Conference (CITCON18) by the Open Information Foundation. The presenter of the pitch wanted to spark discussion about the benefits of traditional conferences versus Open Space events. CITCON itself is an Open Space event, which is an excellent format for IT professionals to share ideas and network.

CITCON gathered this spring in Vienna, Austria, with around 60 people, six of whom were Hedgehogs from our Software Automation Tribe. Open Space events work best with 50-100 people, since the sessions are organized on the spot in an agile manner. The participants are mostly software and testing specialists, but the topics cover more of the human sciences than you’d find at traditional IT conferences.

The event starts with a quick round of introductions, followed by short pitch talks proposing topics for the upcoming discussion sessions. You are allowed to present one pitch at a time, but you can join the line of presenters again after your first pitch is done. People are also encouraged to ask clarifying questions if an idea is hard to grasp. After each pitch, a Post-It note with the topic candidate is added to a whiteboard. Each session is meant to last one hour.

Tommi Oinonen pitching his session topic

When the line of presenters had finished, the number of Post-It notes had grown to around 50. Next, everyone could vote with a pencil for the topics they were interested in, one vote at a time. Five time slots and five rooms were booked for the sessions the next day, so there were 25 sessions in total. When all the voters had visited the whiteboard, the most popular topics began to move to a timetable chart. This also happened in a self-organizing way: everybody could move the notes, and the most-voted ones found their time slots. Similar topics could end up bundled into the same session. The final timetable was not settled until just before the sessions began the next day.

The timetable is then used by participants to plan their attendance at the different sessions. However, the most important rule of Open Space sessions is the law of two feet: if the current session isn’t interesting, leave it.

Agile timetable

The sessions were held in classrooms, where chairs were arranged in a circle to maintain the best possible connection between participants. Each classroom also had a whiteboard for demonstrations. The topics ranged from technical software problem solving to sea shanties (!?). The most popular topics concerned communication skills in working life. These sessions were facilitated by professionals who also solve such issues in their everyday work. The session I liked most was one where participants brought up communication-related challenges, which were then worked through in a role-playing-style dialogue.

As an example of a more technical session topic, Tommi Oinonen from Siili wanted to gather insights for his master’s thesis on metrics for software test automation and version control. Tommi truly got some good philosophical conversation and opinions to bring home from CITCON.

If I had to list a negative about this Open Space event, I felt that people were sometimes too eager to create session topics. Some general-level subjects, lacking a deeper focus on a strictly defined problem, led conversations into academic monologue.

Open Space events encourage people to create sessions of their own in which everyone can have interesting discussions. Compared especially with lecture-like conferences, interaction increases, even among shyer engineers. As one organizer encapsulated it in his opening speech: if you did not get what you were looking for here, blame the organizers, and that means yourself!

Pekka Rantala

A Thought on Writing Tests That Suck Less


I visited the ClojureD conference in Berlin on 24.2.2018, during some cold winter days. I went with my colleagues to gather insights and listen to interesting talks on various subjects around Clojure. As a result, I decided to challenge myself (and the tests I write), inspired by Torsten Mangner’s good presentation, titled:
Writing tests that suck less – “What” vs “How”.
Disclaimer: This post takes no credit whatsoever for the subject, nor does it claim to present the best way of doing things. The text relies on Torsten Mangner’s presentation, but it also includes some personal interpretation.

The Gauntlet

As Torsten reasoned, writing pure unit tests in Clojure (or any other language) is usually a small task. You invoke your unit with inputs and assert the results. Tests ideally document our software (as an added side effect of verification).
Let’s look at a simple, self-documenting unit test.
;; the implementation (the unit, the HOW)
(defn add-numbers [numbers]
  (apply + numbers))

;; the test that should describe to a (technical) reader WHAT we are testing
(deftest add-two-numbers-together
  (testing "If two numbers can be added together"
    (is (= 3 (add-numbers '(1 2))))))
So it’s pretty straightforward: we don’t need to know the HOW or any other implementation details, and because we are testing a pure unit with no dependencies, we don’t need to prepare anything obfuscating or arcane.
Writing tests should not be an effort for a developer – but it starts to feel like one when we run out of pure functions and move on to integration testing. It shouldn’t be too hard to write the actual tests – yet we end up with a bunch of namespace_test.clj(s) files that can be many times longer than the actual code and include a lot of HOWs: arcane and magical ceremonies where we roll out mocks, stubs and whatnot, finally ending in one or a few strange assertions.
If you jump into a project and find that the documentation is basically the source code, you would be glad to find tests that clearly tell you WHAT they test and prove. We don’t want to investigate and reverse-engineer the HOW part; we should be able to just read what the software, in its current state, is documented and supposed to do.
At some point, as a software developer, you will need to know the HOW, but that is more part of writing the implementation (and the tests).

Throwing The Gauntlet

Thus I challenged myself to separate the bloat of HOW out of my tests, which repeatedly ended up looking like this:
(deftest handler-post-to-user-success
  (testing "calling handler posting message to user"
    (async done
      (go
        ;; channel open
        (.mockOnce xhr-mocklet "POST" #".*"
          (mocked-response (clj->js {:ok true}) 200 "application/json"))
        ;; post
        (.mockOnce xhr-mocklet "POST" #".*"
          (mocked-response (clj->js {:ok true}) 200 "application/json"))
        (let [response (<! (api/handler
                             {:body {"data" {"message" "testmessage"
                                             "user-id" "user-id"}}}))]
          (is (utils.http/request-success? response))
          (is (= "post message to channel success"
                 (-> response :body :message)))
          (done))))))

This test mocks two successful HTTP calls, invokes the handler (the entry point of the serverless function) and performs some assertions.

For those who are curious about what’s going on here: this is a test for a serverless REST API handler in a Slack bot backend. It’s written in ClojureScript and runs on the JavaScript function runtime on Azure. See Siili Solutions / Hedge on GitHub – a serverless framework for deploying ClojureScript functions on Azure and AWS.

This test checks how our serverless API responds to clients when it succeeds in contacting the Slack API.

So is this test complex? Maybe not too complex. But when you test more complicated logic, it probably will be. There is also a lot of potentially arcane ceremony that doesn’t add to test readability, like async done, (done), go and <!, which is purely about the HOW in my tests. And those things just repeat over and over, multiple times per test namespace.
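For readers who haven’t written asynchronous ClojureScript tests, a minimal sketch of that bare ceremony looks roughly like this (the namespace aliases and the toy channel are my assumptions; with recent core.async versions the go macro can be referred directly):

```clojure
(ns example.async-ceremony-test
  (:require [cljs.test :refer-macros [deftest testing is async]]
            [cljs.core.async :refer [<! go chan put!]]))

;; The bare HOW: every async test repeats this scaffolding,
;; regardless of WHAT it actually verifies.
(deftest some-async-test
  (testing "the ceremony around a single assertion"
    (async done                      ; tell cljs.test this test is asynchronous
      (go                            ; enter a core.async go block so <! works
        (let [result (<! (doto (chan) (put! :ok)))] ; await a value from a channel
          (is (= :ok result))        ; the one line that is about the WHAT
          (done))))))                ; signal that the async test has finished
```

Everything except the single `is` line is plumbing, and it is exactly this plumbing that gets copied into every async test in a namespace.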

The Dust Settles

So this was my quick attempt to find a more general, WHAT-oriented form for my tests.

(deftest handler-post-to-user-success
  (testing-handler "calling handler, posting message to user succeeds"
    :incoming-request {:body {"data" {"message" "testmessage"
                                      "user-id" "user-id"}}}
    :ext-http-calls-return #(do (channel-open-success)
                                (post-message-success))
    :assert #(do (is (utils.http/request-success? %))
                 (is (= "post message to channel success"
                        (-> % :body :message))))))

Maybe it isn’t state of the art or perfect, but I hope you see the intent. So, a summary of the WHAT in my case:

  • We are testing the handler (the serverless function’s entry point)
  • The input values are presented in :incoming-request
  • :ext-http-calls-return describes what is going on in the “external” dependencies during this run (mocks are instantiated)
  • :assert contains the assertions that should be done for this test

And therefore:

  • Less noise
  • Less boilerplate
  • More documentation

So I re-wrapped testing to do the things that are repeated over and over in all the tests:

(defn testing-handler
  [message & {:keys [incoming-request ext-http-calls-return assert]}]
  (testing message
    (async done
      (go
        ;; install mocks
        (ext-http-calls-return)
        ;; invoke the handler
        (let [response (<! (api/handler incoming-request))]
          ;; run this test's assertions on the response
          (assert response)
          (done))))))
And my mocks are just different combinations of successes and failures, so they were simply wrapped into functions of their own.
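The mock helpers themselves aren’t shown above, but on top of the same xhr-mocklet and mocked-response setup they could look roughly like this (a sketch; the failure payload and status code are my assumptions, not the actual Slack API responses):

```clojure
;; Hypothetical helpers: each wraps one .mockOnce call so a test can
;; just state "the channel-open call succeeds" instead of repeating
;; the mocking ceremony.
(defn channel-open-success []
  (.mockOnce xhr-mocklet "POST" #".*"
    (mocked-response (clj->js {:ok true}) 200 "application/json")))

(defn post-message-success []
  (.mockOnce xhr-mocklet "POST" #".*"
    (mocked-response (clj->js {:ok true}) 200 "application/json")))

;; A failure is just another combination:
(defn post-message-failure []
  (.mockOnce xhr-mocklet "POST" #".*"
    (mocked-response (clj->js {:ok false :error "channel_not_found"})
                     404 "application/json")))
```

A failure-path test then only swaps the helper passed to :ext-http-calls-return; nothing else in the test changes.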


You could still argue that this could be simplified even more; macros could help get rid of even more noise. But at the end of the day, Torsten’s presentation at ClojureD made me more motivated to try to write tests that suck less. Or die trying.