Welcome to March 32nd

My wife and I both have watches with a date window - the one that shows the day of the month. At the end of March, near midnight, my watch’s date window switched to “1”, which stands for April 1st. My wife’s watch, though, started to show “32” instead. I was curious why, and found out that it is because of the so-called outsize date mechanism.

The standard date mechanism is made of a single ring with the numbers 1 to 31 printed on it. The ring gradually rotates and eventually switches to the next number. My watch has this mechanism. The “problem” is that the window, and thus the number inside it, has to be small, because the whole ring has to fit into the frame.

The feature that allows a larger “font size” in date windows goes under the name “the oversize date complication” [1]. It uses two pieces for displaying the date: a units disc and a tens cross, which are nicely synchronized. The tens cross has the numbers 0 to 3, and the units disc 0 to 9. The displayed day of the month is then the combination of the two digits.

This video has a nice visual explanation of the outsize date mechanism.

So why does my wife’s watch show 32? Well, it actually goes up to 39! It seems that in some “cheap” implementations of the oversize date mechanism, the tens cross and the units disc are not properly synchronized. Instead of the tens cross switching from 3 to 0 to indicate the beginning of a new month, the units disc just continues to rotate: 2, 3, 4, and so on.
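
Just to illustrate the effect (this is a toy model of the display, not of the actual gears): if the units disc advances every day and the tens cross advances only when the units disc rolls over from 9 to 0, the display sails right past 31.

// Toy model of an unsynchronized outsize date display: the units disc (0-9)
// advances every day, while the tens cross (0-3) advances only when the
// units disc rolls over from 9 to 0.
public class UnsynchronizedDate {
    public static void main(String[] args) {
        int tens = 3;  // tens cross position, starting at "31" (March 31st)
        int units = 1; // units disc position

        for (int day = 0; day < 10; day++) {
            units = (units + 1) % 10;
            if (units == 0) {
                tens = (tens + 1) % 4; // re-synchronizes only after a full revolution
            }
            System.out.println("" + tens + units); // 32, 33, ..., 39, 00, 01
        }
    }
}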

References

  • [1] The term “complication” means any function on a watch other than the display of time. Read more about watch complications.

Covariance and contravariance in programming

Whenever I hear “covariant return type”, I have to pause and engage my System 2 [1] thoroughly in order to understand what I have just heard. And even then, I would not bet that I could properly explain what it means. So this post serves as my memo on the concept of variance [2] in programming.

The notion of variance is related to the topic of subtyping [3] in programming language theory. It deals with the rules of what is and is not allowed with regard to function arguments and return types.

Variance comes in four forms:

  • invariance
  • covariance
  • contravariance
  • bivariance (will skip that)

Before we dive into explanations, let us agree on the pseudocode that I am going to use. The > operator denotes subtyping. In the example

Vehicle > Bus

Bus is a subtype of Vehicle. Functions are defined with the following syntax:

func foo(T): R

where T is the type of an argument, and R is the return type of the function foo. Functions can also override other functions (think “overriding a method in Java”). Here, bar overrides foo:

func foo(T): R > func bar(T): R

Throughout the examples, I will be using this hierarchy of types.

Vehicle > MotorVehicle > Bus

Invariance

Invariance is the easiest to understand: it does not allow anything - neither a supertype nor a subtype - to be used in place of the declared argument or return type in inherited functions. For instance, if we have a function:

func drive(MotorVehicle)

Then the only possible way to define an inheriting function is with a MotorVehicle argument - not Vehicle, and not Bus.

func drive(MotorVehicle) > func overrideDrive(MotorVehicle)

Same goes for return types.

func produce(): MotorVehicle > func overrideProduce(): MotorVehicle

This way, the type system of a language doesn’t allow you much flexibility, but protects you from many possible type errors.
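
To get a concrete feel for the rule, here is a small Java illustration (a slightly different setting - generics rather than overriding - but the idea is the same): generic types in Java are invariant by default, so neither a supertype nor a subtype is accepted where an exact type parameter is expected.

import java.util.ArrayList;
import java.util.List;

class Vehicle {}
class MotorVehicle extends Vehicle {}
class Bus extends MotorVehicle {}

public class InvarianceDemo {
    public static void main(String[] args) {
        // OK: the type parameter matches exactly
        List<MotorVehicle> exact = new ArrayList<MotorVehicle>();

        // Both of the following fail to compile, because List<T> is invariant in T:
        // List<MotorVehicle> fromSubtype = new ArrayList<Bus>();       // compile error
        // List<MotorVehicle> fromSupertype = new ArrayList<Vehicle>(); // compile error
    }
}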

Covariance

Covariance allows subtypes - in other words, more specific types - to be used in place of a declared function argument or return type. Let’s start with return types, which are covariant. Consider these two functions:

func produce(): MotorVehicle > func overrideProduce(): Bus

Is it OK that overrideProduce returns the more concrete Bus instead of MotorVehicle? Yes, it is! Since Bus is a subtype of MotorVehicle, it meets the contract, because it supports everything a MotorVehicle can do. So this is allowed:

motorVehicle = produce()
motorVehicle = overrideProduce()

In this case, for the calling code there is no difference whether the motorVehicle variable holds a MotorVehicle instance or a Bus.
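
Java supports exactly this under the name “covariant return type”. A minimal sketch with the same hierarchy:

class Vehicle {}
class MotorVehicle extends Vehicle {}
class Bus extends MotorVehicle {}

class MotorVehicleFactory {
    MotorVehicle produce() {
        return new MotorVehicle();
    }
}

class BusFactory extends MotorVehicleFactory {
    @Override
    Bus produce() { // covariant return type: Bus is a subtype of MotorVehicle
        return new Bus();
    }
}

public class CovariantReturnDemo {
    public static void main(String[] args) {
        MotorVehicleFactory factory = new BusFactory();
        MotorVehicle motorVehicle = factory.produce(); // callers still see a MotorVehicle
        System.out.println(motorVehicle.getClass().getSimpleName()); // prints "Bus"
    }
}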

But what about function arguments? Is this definition fine?

func drive(MotorVehicle) > func overrideDrive(Bus)

This is actually not allowed by a safe type system, because overrideDrive breaks the parent’s contract. Users of drive expect to be able to pass any kind of MotorVehicle, not only a Bus. Indeed, imagine someone calls drive with, say, a Car (where MotorVehicle > Car): the call to overrideDrive becomes overrideDrive(Car), but overrideDrive works only with Bus instances. So function arguments are not covariant.
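
Java itself ships a cautionary example of covariant arguments: arrays are covariant, so the compiler accepts the assignment below, and the mistake only surfaces at runtime as an ArrayStoreException (Car is a hypothetical extra subtype, analogous to the one above).

class Vehicle {}
class MotorVehicle extends Vehicle {}
class Bus extends MotorVehicle {}
class Car extends MotorVehicle {}

public class ArrayCovarianceDemo {
    public static void main(String[] args) {
        // Allowed, because Java arrays are covariant: a Bus[] is treated as a MotorVehicle[]
        MotorVehicle[] motorVehicles = new Bus[1];

        // Compiles fine, but throws ArrayStoreException at runtime:
        // the underlying array can only hold Bus instances.
        motorVehicles[0] = new Car();
    }
}

And here we approach contravariance.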

Contravariance

Contravariance allows supertypes - in other words, more abstract types - to be used in place of a declared type. Function arguments are contravariant. Let’s have a look at an example.

func drive(MotorVehicle) > func overrideDrive(Vehicle)

Though it looks counterintuitive, this is a perfectly valid case. overrideDrive meets the parent’s contract: it supports any Vehicle, and since MotorVehicle is a subtype of Vehicle, users of drive can still pass any MotorVehicle instance.
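
Java does not allow contravariant parameter types when overriding (a method with a broader parameter type simply becomes an overload), but lower-bounded wildcards express the same idea. A minimal sketch:

import java.util.function.Consumer;

class Vehicle {}
class MotorVehicle extends Vehicle {}
class Bus extends MotorVehicle {}

public class ContravarianceDemo {
    // Any consumer that can handle a MotorVehicle is accepted, including the more
    // general Consumer<Vehicle>: consumers are contravariant in their argument type.
    static void drive(MotorVehicle motorVehicle, Consumer<? super MotorVehicle> driver) {
        driver.accept(motorVehicle);
    }

    public static void main(String[] args) {
        Consumer<Vehicle> parkAnyVehicle = vehicle -> System.out.println("Parking " + vehicle);
        drive(new Bus(), parkAnyVehicle); // a Consumer<Vehicle> works where a MotorVehicle consumer is expected
    }
}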

References

Books I read in 2019

2019 is the first year I decided to track the books I read on Goodreads. Here is the list, though it might not be complete, because I got the idea only in December and struggled to recall all the books.

It is quite clear that fiction prevails. I will definitely spend more time reading more useful books this year.

Fiction (12)

The Three-Body Problem by Liu Cixin

The Dark Forest by Liu Cixin

Death’s End by Liu Cixin

The Fifth Season by N.K. Jemisin

The Obelisk Gate by N.K. Jemisin

The Stone Sky by N.K. Jemisin

Steelheart by Brandon Sanderson

Firefight by Brandon Sanderson

Calamity by Brandon Sanderson

The Stars My Destination (Tiger! Tiger!) by Alfred Bester

Edgedancer by Brandon Sanderson

Oathbringer by Brandon Sanderson

Non-fiction (5)

Release It!: Design and Deploy Production-Ready Software by Michael T. Nygard

The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change by Camille Fournier

A Brief History of Time: From the Big Bang to Black Holes by Stephen Hawking

Linchpin: Are You Indispensable? by Seth Godin

Software Architecture Patterns by Mark Richards

The origin of the word "maintenance"

Oh boy, here I go mistyping the word “maintenance” again! For some reason, this is the most complicated English word for me in terms of spelling. It comes out as “maintanence”, or “maintenence”, or some other variant, but almost never the correct one. Even seemingly complicated words like “daughter” are a no-brainer for me. But the moment I finish typing “maint”, I already know I am doomed to fail.

So I decided to look up who came up with this word and when. Just, you know, to thank this person for all the pain and suffering I have endured. Though something tells me that would be pretty hard without a time machine. And indeed, the origins of the word “maintenance” go back as far as the 12th century.

mid-14c., maintenaunce [1]

⬇︎

from Old French maintenance “upkeep; shelter, protection”

⬇︎

from maintenir “to keep, sustain; persevere in”

⬇︎

c. 1300, maintenen, “to support, uphold, aid;” [2]

⬇︎

from Anglo-French meintenir (Old French maintenir, 12c.) “persevere in, practice continually”

⬇︎

from Latin manu tenere “hold in the hand,”

⬇︎

from manu, ablative of manus “hand” + tenere “to hold”

Now I know. I hope it helps me to finally remember how to spell it.

References

Java geospatial in-memory index

One of my recent tasks included searching for objects within some radius based on their geo coordinates. For various reasons - not relevant to this topic - I wanted to make this work completely in memory. That’s why solutions like MySQL Spatial Data Types, PostGIS or Elasticsearch Geo Queries were not considered. The project is in Java. I started to look for possible options, and though I found a few, they all lacked easy-to-follow documentation (if any) and examples.

So I decided to write a short description of the Java in-memory geospatial indices I discovered during my research, with code examples and benchmarks done with jmh.

Again, the task at hand: given a geo point, find the object nearest to it within a given radius using in-memory data structures. As an extra requirement, we would like to attach arbitrary data to the objects stored in these data structures. The reason is that, in most cases, these objects are not merely geo points but domain entities, and we build our business logic on top of them. In our case, the arbitrary data will be just an integer ID, and we pretend we can later fetch the required entity from some repository by this ID.

Figure 1. We need to find all green points within the radius of D km from the source point S.

Lucene spatial extras

I learned about Lucene while using Elasticsearch, because Elasticsearch is built on top of it [1]. I thought: well, Elasticsearch has geo queries implemented with Lucene, which means Lucene supports them and, maybe, also supports an in-memory geospatial index. And I was right. The Lucene project has a Spatial-Extras module that encapsulates an approach to indexing and searching based on shapes.

Using this module turned out to be a non-trivial task. Apart from the JavaDocs and the source code, I could only find an example of its usage in the Apache Solr + Lucene repository, and I based my implementation on it. Lucene provides a generalised approach to indexing and searching different types of data, and the geospatial index is just one of the flavours.

Let’s have a look at the example.

final SpatialContext spatialCxt = SpatialContext.GEO;
final ShapeFactory shapeFactory = spatialCxt.getShapeFactory();
final SpatialStrategy coordinatesStrategy =
	new RecursivePrefixTreeStrategy(new GeohashPrefixTree(spatialCxt, 5), "coordinates");

// Create an index
final Directory directory = new RAMDirectory();
IndexWriterConfig iwConfig = new IndexWriterConfig();
IndexWriter indexWriter = new IndexWriter(directory, iwConfig);

// Index some documents
var r = new Random();
for (int i = 0; i < 3000; i++) {
	double latitude = ThreadLocalRandom.current().nextDouble(50.4D, 51.4D);
	double longitude = ThreadLocalRandom.current().nextDouble(8.2D, 11.2D);

	Document doc = new Document();
	doc.add(new StoredField("id", r.nextInt()));
	var point = shapeFactory.pointXY(longitude, latitude);
	for (var field : coordinatesStrategy.createIndexableFields(point)) {
		doc.add(field);
	}
	doc.add(new StoredField(coordinatesStrategy.getFieldName(), latitude + ":" + longitude));
	indexWriter.addDocument(doc);
}
indexWriter.forceMerge(1);
indexWriter.close();

// Query the index
final IndexReader indexReader = DirectoryReader.open(directory);
IndexSearcher indexSearcher = new IndexSearcher(indexReader);

double latitude = ThreadLocalRandom.current().nextDouble(50.4D, 51.4D);
double longitude = ThreadLocalRandom.current().nextDouble(8.2D, 11.2D);
final double NEARBY_RADIUS_DEGREE = DistanceUtils.dist2Degrees(100, DistanceUtils.EARTH_MEAN_RADIUS_KM);
final var spatialArgs = new SpatialArgs(SpatialOperation.IsWithin,
										shapeFactory.circle(longitude, latitude, NEARBY_RADIUS_DEGREE));
final Query q = coordinatesStrategy.makeQuery(spatialArgs);
try {
	final TopDocs topDocs = indexSearcher.search(q, 1);
	if (topDocs.totalHits == 0) {
		return;
	}
	var doc = indexSearcher.doc(topDocs.scoreDocs[0].doc);
	var id = doc.getField("id").numericValue();
} catch (IOException e) {
	e.printStackTrace();
}

In order to use it, we need to:

  1. Create an index. At this step you can choose where to store the index. For our use case there is RAMDirectory, which is essentially in-memory storage. This class is marked as deprecated because, according to the documentation, it uses inefficient synchronization. This might explain its poor performance, but we’ll come back to that later.
  2. Index some documents. To make our index support geospatial queries, we need a field of type Shape, in particular Point, in our documents.
  3. Query the index. Perform a spatial operation against the index.

Oh my gosh! That’s a good deal of classes to consider! It is definitely not going to win the contest for “the clearest and easiest-to-use API”. Though, as you would expect, Lucene indices are the most flexible:

  • It provides various shape implementations: point, rectangle, and circle. You can also teach it to support polygons with an additional dependency.
  • You can put any data into the indexed document along with its geo point. This means you can store the whole entity there and perform other queries supported by Lucene, for example fuzzy text matching.
  • It supports various spatial operations: is within, contains, intersects, etc. Check SpatialOperation.
  • It has distance and other spatial-related math calculations.

Jeospatial

Jeospatial is a geospatial library that provides a set of tools for solving the k-nearest-neighbor problem on the earth’s surface. It is implemented using vantage-point trees and claims to have O(n log(n)) time complexity for indexing operations and O(log(n)) for searching. A great visual explanation, with examples, of how vantage-point trees are constructed can be found in this article.

Figure 2. An illustration of a Vantage-point tree.

The library is pretty easy and straightforward to use.

// Create a custom class to hold an ID
class MyGeospatialPoint extends SimpleGeospatialPoint {
    private final int id;

    MyGeospatialPoint(int id, double lat, double lon) {
        super(lat, lon);
        this.id = id;
    }

    int getId() {
        return id;
    }
}

// Init the vantage-point tree and add elements to it
VPTree<SimpleGeospatialPoint> jeospatialPoints = new VPTree<>();
for (int i = 0; i < 3000; i++) {
	final double latitude = ThreadLocalRandom.current().nextDouble(50.4D, 51.4D);
	final double longitude = ThreadLocalRandom.current().nextDouble(8.2D, 11.2D);
	jeospatialPoints.add(new MyGeospatialPoint(i, latitude, longitude));
}

// Get the nearest neighbor of a given point
final double latitude = ThreadLocalRandom.current().nextDouble(50.4D, 51.4D);
final double longitude = ThreadLocalRandom.current().nextDouble(8.2D, 11.2D);
var neighbor = (MyGeospatialPoint) jeospatialPoints.getNearestNeighbor(new MyGeospatialPoint(-1, latitude, longitude), 100 * 1000); // the query point needs no real ID
var id = neighbor.getId();

It is much clearer than the Lucene example: init a VPTree, add points, perform a query. The simplicity doesn’t come without cost, of course - the library is somewhat limited in functionality and can only be used to solve the k-nearest-neighbor problem. Which is perfectly fine for me, because this is exactly what I needed.

As VPTree can only hold objects of the GeospatialPoint type, to attach additional data to objects stored in the index we need to create a class that extends its only implementation, SimpleGeospatialPoint, and holds the required data. Note that getNearestNeighbor accepts the search distance in meters as its second argument.

More info can be found in the official GitHub repository.

The Java Spatial Index

The Java Spatial Index is a Java version of the R-tree spatial indexing algorithm as described in the 1984 paper “R-trees: A Dynamic Index Structure for Spatial Searching” by Antonin Guttman [2].

Figure 3. An example of an R-tree for 2D rectangles. Image courtesy of Wikipedia.

The main element behind this data structure is a minimum bounding rectangle. The “R” in R-tree stands for rectangle. Each rectangle describes a single object, and nearby rectangles are then grouped into another rectangle at a higher level [3]. That’s a lot of rectangles in one sentence!

Alright, enough jokes, let’s have a look at the code example:

final RTree rtree = new RTree();
rtree.init(null);

// Index some points
var r = new Random();
for (int i = 0; i < 3000; i++) {
	final double latitude = ThreadLocalRandom.current().nextDouble(50.4D, 51.4D);
	final double longitude = ThreadLocalRandom.current().nextDouble(8.2D, 11.2D);
	final var rect = new Rectangle((float) latitude, (float) longitude,
								   (float) latitude, (float) longitude);
	rtree.add(rect, r.nextInt());
}

// Perform a query
final float latitude = (float) ThreadLocalRandom.current().nextDouble(50.4D, 51.4D);
final float longitude = (float) ThreadLocalRandom.current().nextDouble(8.2D, 11.2D);
final var s = new Point(latitude, longitude);

final var distDegree = (float) DistanceUtils.dist2Degrees(100, DistanceUtils.EARTH_MEAN_RADIUS_KM);

final var atomicId = new AtomicInteger();
rtree.nearest(s, v -> {
	atomicId.set(v);
	return true;
}, distDegree);

var id = atomicId.get();

To be honest, the code was a bit confusing for me to write, because:

  1. Rectangles everywhere. We need to index geo points, but the library supports only rectangles, so we have to create “fake” rectangles with the same latitude and longitude for both corners.
  2. The distance argument in nearest. The distance argument to rtree.nearest is a spherical distance in degrees. Here we convert 100 km to degrees using Lucene’s DistanceUtils class :) Or you can just put 0.89932036.
  3. The usage of AtomicInteger. This is needed because rtree.nearest invokes a callback (here a lambda) for each result. A local variable used inside a lambda has to be effectively final, so we cannot reassign it - but we can mutate an object’s state. Yeah, whatever.

You can find more examples in the repository.

Benchmarks

I ran some benchmarks with all of the above implementations. It is worth noting that I measure only querying performance, not indexing. The reason is that my application should be optimized for read load, and it is totally fine if building the indices takes some time. Of course, you can easily adjust the code to benchmark the indexing phase as well.

In the preparation step, we create 3000 random geo points and store them in the index. During the benchmark itself, we perform a query against the index to find the nearest neighbour within 100 km. You can find the full source code for the benchmarks in my GitHub repository.
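
To give an idea of what these benchmarks look like, here is a rough, simplified sketch of the Jeospatial one. It assumes the MyGeospatialPoint wrapper class from the Jeospatial example above; library imports are omitted, as in the snippets above, and only the query is measured because the index is built in the setup phase.

import java.util.concurrent.ThreadLocalRandom;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class MyBenchmark {
	private VPTree<MyGeospatialPoint> index;

	// The index is built once, before the measurements start
	@Setup
	public void buildIndex() {
		index = new VPTree<>();
		for (int i = 0; i < 3000; i++) {
			double latitude = ThreadLocalRandom.current().nextDouble(50.4D, 51.4D);
			double longitude = ThreadLocalRandom.current().nextDouble(8.2D, 11.2D);
			index.add(new MyGeospatialPoint(i, latitude, longitude));
		}
	}

	// Only the query is measured: nearest neighbour within 100 km (the distance is in meters)
	@Benchmark
	public Object benchJeospatial() {
		double latitude = ThreadLocalRandom.current().nextDouble(50.4D, 51.4D);
		double longitude = ThreadLocalRandom.current().nextDouble(8.2D, 11.2D);
		return index.getNearestNeighbor(new MyGeospatialPoint(-1, latitude, longitude), 100 * 1000);
	}
}

Here are the results.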

java -jar target/benchmarks.jar -foe true -rf csv -rff benchmark.csv
...
Benchmark                     Mode  Cnt       Score        Error  Units
MyBenchmark.benchJeospatial  thrpt    3   99235,728 ± 132352,814  ops/s
MyBenchmark.benchJsi         thrpt    3  309795,462 ± 202296,631  ops/s
MyBenchmark.benchLucene      thrpt    3    1199,864 ±   1199,138  ops/s

And a visual representation.

To be honest, I was kind of surprised to find out that Lucene performed so badly. It could be because a) its RAMDirectory is just slow, or b) I am simply not using it correctly. My guess is some misconfiguration, though I could not figure out what was wrong. I asked a question on StackOverflow, but so far no answers.

References

Enabling trace logging for Elasticsearch REST Client with Logback

Recently I had some issues with Elasticsearch - all requests were failing with a “bad request” error. In order to understand what was wrong with these requests, I naturally decided to enable debug/trace logging for the ES REST Client, but couldn’t figure out how. Partially because the official documentation on this topic could have been more informative, to be honest. But mainly because my project uses Logback and the REST Client package uses Apache Commons Logging.

This article is a short summary of how I eventually managed to enable tracing with Logback. The patient under inspection is Elasticsearch 6.3 with its Java Low Level REST Client.

According to the official documentation, we need to enable trace logging for the tracer package. If you are interested, you can check the source code of the org.elasticsearch.client.RequestLogger class, where the logger with this name is defined:

...
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
...
private static final Log tracer = LogFactory.getLog("tracer");
...

As you can see, enabling this logger with TRACE level in Logback is not enough, because, again, the client uses Apache Commons Logging.

Luckily, Logback was designed with this use case in mind and provides a set of bridging modules [1]. They allow us to use Logback even with dependencies that rely on other logging APIs. In particular, we’re looking for jcl-over-slf4j.jar.

So, here are the steps.

Require jcl-over-slf4j.jar. The dependencies section for Gradle:

dependencies {
    implementation('org.slf4j:slf4j-api:1.8.0-beta2')
    implementation('ch.qos.logback:logback-classic:1.3.0-alpha4')
    implementation('org.slf4j:jcl-over-slf4j:1.8.0-beta2')
}

Exclude commons-logging.jar. The details of why are described in the Logback docs here.

dependencies {
    configurations.all {
        exclude group: "commons-logging", module: "commons-logging"
    }
}

Enable tracer logger in Logback configuration.

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>[%d{ISO8601}] [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="WARN">
        <appender-ref ref="STDOUT" />
    </root>

    <logger name="tracer" level="TRACE" additivity="false">
        <appender-ref ref="STDOUT" />
    </logger>

    <!-- Additionally, you can enable debug logging for the RestClient class itself -->
    <logger name="org.elasticsearch.client.RestClient" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
    </logger>
</configuration>

Voilà! Enjoy your debugging session!

References

Missing code review advice

We all know that code reviews are important and have a lot of value. There are plenty of “best practices” articles telling you how you should do a code review, when, at which pace, on which Moon cycle, which SMARTass criteria to prepare, etc. [1] and [2].

But I believe they miss one piece of advice.

There are two types of changes you as a code reviewer can propose: ones that involve huge effort and ones that do not. Examples of the former are architectural changes, missing functionality, wrong interpretation of requirements, etc. Among the latter are code style issues, typos, redundant comments, missing type hints, obvious small bugs, etc.

Whenever you see one of these small issues, instead of writing a comment asking to fix it, just switch to the branch, fix it yourself, and push to the repository.

This approach has several benefits. First, you save a lot of time for both of you: the developer who submitted a merge request does not need to be distracted by another notification, and you do not need to re-check the change later in the next round.

Second, it reduces the amount of those ping-pong code review rounds.

Third, you immediately feel responsibility for this code, which is good, because in the end it is not someone else’s code - it is your code also. You may be the one to fix a bug in it next week.

References

Sometimes you are no one without your phone

We become more and more dependent on all sorts of technology and gadgets in our lives. For exactly this reason, I try to minimize the number of digital things I interact with daily.

All I have is my laptop and, of course, my smartphone. And it is my smartphone that made me recently feel completely helpless.

One day I decided to try a scooter sharing app.

I took a scooter and had a 10-15 minute ride to a subway station, where I wanted to switch and go home. The ride was really great. But when I arrived, I had to end my rent. Via the application.

For several minutes I was getting a Bluetooth connectivity error and could not stop the rent. And then my phone’s battery just died. One second I had ~20% battery, and the next the screen was black.

The scooter is still on me. It’s half past nine in the evening, and here in Berlin that means almost everything is closed, and there is no place to charge a phone. And I did not have a power bank either.

I asked some guys to call the company’s hotline. They did, but the phone number on their site… did not work. Ha-ha. It turned out they had some problems with the line that day.

I was pressing all the buttons on the scooter in the hope that there was some hidden combination to manually stop the rent, when another guy approached me and asked if I had any problems.

It turned out that the guy was also a software engineer. No surprise here, though: if you see a young guy with a backpack in Berlin, chances are really high he is a developer. Not as high as in the Bay Area, I guess, but we’re getting there. Especially in rent prices [1].

He had also previously worked for the company whose scooter I was using. Unfortunately, he did not know any special tricks, and we started to think logically together.

He suggested installing the app on his phone, so I could sign in and end the rent.

Very nice of him, but I did not know my password, because I am using 1Password [2].

He suggested resetting the password via e-mail.

Well, I know the password for my e-mail account, but I am using two-factor authentication and get one-time passwords… from the app on my phone. I also did not have them printed out.

A bit later, he started to call his former colleagues in order to find out what to do. While he was on the phone, the magic happened - the scooter flashed its light a few times and turned itself off. It turns out that if you do not touch the scooter for ~5 minutes, it automatically switches to parking mode. I still had to write them an e-mail from home saying that the scooter was on me, and blah-blah.

The company behaved well later: they charged me only the standard price and even gave me a free ride.

The result of the evening? The next day I bought a power bank and printed out the one-time passwords for my e-mail account. I do not want to experience that helplessness ever again.

References

Building a Slack command with Go

This post is a step by step tutorial on how to build a simple Slack command with Go.

  1. So, what are we going to build?
  2. Anatomy of a Slack command
  3. Local development with ngrok
  4. Slack application and command
  5. Source code overview
  6. Running the app with Docker
  7. Deploying the app to Heroku

So, what are we going to build?

By the end of the tutorial, we’ll have a /cowsay <message> command that formats the message in the same way as its Linux counterpart. It actually uses the Linux utility to do the job, so it is basically just a wrapper with an HTTP interface. The final result will look like this:

Anatomy of a Slack command

Before going into the implementation, let’s have a look at how Slack commands work, what we need to implement, and how all the parts will communicate with each other.

I know, I know. My drawing skills are awesome. But back to the diagram. Nothing fancy here:

  1. A Slack client sends a command, in our case /cowsay Some text here.
  2. Slack servers accept the command and do their magic. We care only that they then prepare a request in a defined format and send it to our application server.
  3. This is where we come into play - we basically need to write the application server that will process requests from Slack servers.
  4. And respond back to Slack servers.
  5. Slack servers proxy our response from the application server back to the client, which…
  6. … displays the result to the user.

Local development with ngrok

As you can see from the diagram above, in order for our command to operate, Slack needs to send an HTTP request to some endpoint, which means that our application should be available on the Internet. This is not a problem once the application is deployed somewhere. But during the development phase, we need our local instance to be available to Slack. This can be done with ngrok, which lets you expose a local server to the Internet. Once started, it will provide you with a publicly available URL for your local server.

So, download and install ngrok first. Then run it:

$ ngrok http 8080

Here we tell ngrok that our server is running on port 8080 (not yet, actually, but it will be). If everything is OK, you’ll see output similar to this:

ngrok by @inconshreveable                                 (Ctrl+C to quit)

Session Status                online
Version                       2.2.8
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://502a662f.ngrok.io -> localhost:8080
Forwarding                    https://502a662f.ngrok.io -> localhost:8080

Connections                   ttl     opn     rt1     rt5     p50     p90
                              1       0       0.00    0.00    0.42    0.42

HTTP Requests
-------------

Pay attention to the URL, https://502a662f.ngrok.io, in the Forwarding section: we’ll need to specify it in our Slack command configuration interface later.

Also, this URL is temporary, meaning that if you stop ngrok now (or close a terminal window, for example), on the next start you’ll get another URL. So, leave a terminal window with ngrok open for the duration of the tutorial.

Slack application and command

It’s time to do some clicky-clicky thingy: we need to create a workspace, an application, and a command in Slack. Go to the create page and follow the instructions to register and create a new workspace.

After you’re done, go to the list of your applications and hit Create New App. There you have to specify your app name (it doesn’t really matter) and select which development workspace this app should be created in. Choose wisely! Choose your newly created workspace. For me it looks like this:

Now go to the application settings. Here you can configure all aspects of the application, for example, change the icon under Display information. For us, the most interesting part now is under App Credentials, where you can find the Verification Token:

This token is used to verify that HTTP requests to our server are actually coming from Slack. We’ll use it later in our source code.

The final step in the Slack administration interface is to create a slash command. While you’re in the application settings, hit Slash Commands in the left menu and then Create New Command. Here is what we need to enter:

Pay attention here to the Request URL: this is the URL provided to us by ngrok in the Local development with ngrok step.

Source code overview

At last. It’s time for the source code. In essence, our application is just a wrapper around the cowsay utility with an HTTP interface. It accepts POST requests and returns the formatted text back. The full source code can be found in the GitHub repository.

Let’s review the startup procedure:

var (
	port  string = "80"
	token string
)

func init() {
	token = os.Getenv("COWSAY_TOKEN")
	if "" == token {
		panic("COWSAY_TOKEN is not set!")
	}

	if "" != os.Getenv("PORT") {
		port = os.Getenv("PORT")
	}
}

func main() {
	http.HandleFunc("/", cowHandler)
	log.Fatalln(http.ListenAndServe(":"+port, nil))
}

By default, the server will listen on port 80, but this can be changed by setting the PORT environment variable. The name of the variable is not random - it is a requirement from Heroku. The COWSAY_TOKEN must be set. This is the Verification Token from the Slack application and command step. It’s a secret value, which is why we don’t put it in any configuration file. The alternative would be to pass it as an argument, but keeping secrets in environment variables is a common practice.

Now, let’s have a look at the cowHandler function:

func cowHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != "POST" {
		http.Error(w, http.StatusText(http.StatusMethodNotAllowed), http.StatusMethodNotAllowed)
		return
	}

	if token != r.FormValue("token") {
		http.Error(w, http.StatusText(http.StatusUnauthorized), http.StatusUnauthorized)
		return
	}

	text := strings.Replace(r.FormValue("text"), "\r", "", -1)
	balloonWithCow, err := sc.Cowsay(text)
	if err != nil {
		log.Println(err)
		http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
		return
	}

	jsonResp, _ := json.Marshal(struct {
		Type string `json:"response_type"`
		Text string `json:"text"`
	}{
		Type: "in_channel",
		Text: fmt.Sprintf("```%s```", balloonWithCow),
	})

	w.Header().Add("Content-Type", "application/json")
	w.Write(jsonResp)
}

Here is what’s going on:

  1. We allow only POST requests. Everything else will result in 405 HTTP error.
  2. We validate that requests come from Slack by checking the token. It must be equal to what we set in COWSAY_TOKEN.
  3. The main job is done by sc.Cowsay(text): it wraps the text from the request with cowsay utility. We’ll get to it later.
  4. We prepare the response and return it as a JSON string. The response object in our case has two keys: response_type and text. The text is, well, the response text. The response_type: "in_channel" tells a Slack client to show the response from the command to everyone in the channel. Otherwise, only the one who issued the command would see the response (it’s called Ephemeral response). Read more about it here.

Now let’s see how sc.Cowsay(text) works:

func Cowsay(text string) (string, error) {
	cmd := exec.Command("/usr/games/cowsay", "-n")
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return "", err
	}

	io.WriteString(stdin, text)
	stdin.Close()

	out, err := cmd.CombinedOutput()
	if err != nil {
		return "", err
	}

	return string(out), nil
}

It just executes cowsay, passes the text (the message we entered in the Slack client) to its standard input, and returns the formatted text back. Notice that we specify the full path to the executable, /usr/games/cowsay. If we wanted to run this locally, we would need to make sure that the binary exists under this path, and it would be hard to distribute our program across computers, because the cowsay binary would have to be under the same full path everywhere. This is exactly why we’re going to distribute our application as a Docker container, where we can provide a predictable and fully reproducible environment.

Running the app with Docker

If you’re not familiar with Docker, I suggest reading an introduction article first - I’m not going into the internals here. So, the Dockerfile:

FROM golang:1.9

RUN apt-get update && apt-get install -y cowsay

ADD . /go/src/github.com/kalimatas/slack-cowbot

RUN go install github.com/kalimatas/slack-cowbot/cmd/cowbot

CMD ["/go/bin/cowbot"]

Here we:

  1. Install cowsay. It will then be under /usr/games/cowsay.
  2. Copy the source from the current directory to /go/src/github.com/kalimatas/slack-cowbot.
  3. Install the binary to /go/bin/cowbot.
  4. Tell Docker to use this binary (our server) as the command to start the container.

To build the image, execute this command in the source directory of our application. Keep in mind that you’ll need to use a different namespace, not kalimatas, because that one is mine :)

$ docker build -t kalimatas/cowbot .
// ... some Docker output

Now we have our image with the latest tag, and we can finally run the application locally:

$ docker run -it --rm --name cowbot -p 8080:80 -e COWSAY_TOKEN=<your_verification_token> kalimatas/cowbot:latest

A few things to pay attention to:

  1. -p 8080:80 tells Docker to map port 80, which is the default for our application, to port 8080 on the local machine. You can use a different port locally, but make sure it is the same port you specified when running ngrok http 8080.
  2. -e COWSAY_TOKEN=<your_verification_token> sets the environment variable that will be read by our application later with token = os.Getenv("COWSAY_TOKEN").

Now the application is running and available on our local machine on port 8080. Let’s validate:

$ curl -XPOST https://502a662f.ngrok.io -d 'token=<your_verification_token>&text=Hello, cow!'
{"response_type":"in_channel","text":"``` _____________\n\u003c Hello, cow! \u003e\n -------------\n        \\   ^__^\n         \\  (oo)\\_______\n            (__)\\       )\\/\\\n                ||----w |\n                ||     ||\n```"}

Notice the URL we used here - https://502a662f.ngrok.io. This is the publicly available URL provided to us by ngrok. It means that our application is actually available on the Internet, and you can even test the command in your Slack client!

But the magic will only work until we stop ngrok, or the Docker container, or just shut down the computer. We need our application to be permanently available, which is why we’re going to deploy it to Heroku.

Deploying the app to Heroku

First, create a free account, then install the Heroku CLI utility.

Log in with your account:

$ heroku login
Enter your Heroku credentials:
Email: kalimatas@gmail.com
Password: *********************
Logged in as kalimatas@gmail.com

Now we’re ready to continue. Here you can find the documentation on how to deploy a Docker-based app to Heroku. The plan is: create a Heroku app, tag our Docker image, and push it to the Container Registry.

Log in to Container Registry:

$ heroku container:login
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

Create a new application:

$ heroku apps:create
Creating app... done, ⬢ guarded-island-34484
https://guarded-island-34484.herokuapp.com/ | https://git.heroku.com/guarded-island-34484.git

Here guarded-island-34484 is a randomly chosen name, and https://guarded-island-34484.herokuapp.com/ is the URL where the application will be available. As you might have guessed, we’ll need to update our Slack command settings, in particular the Request URL, and set it to this URL. Check the Slack application and command section for details.

Now we need to push our image to Heroku’s Container Registry. Heroku requires a specific tag format, i.e. registry.heroku.com/<app>/<process-type>, where <app> is the application name and <process-type> is, in our case, web. For more information, check out this page.

Let’s tag our already existing kalimatas/cowbot:latest Docker image (it is probably different for you if you have chosen another namespace) with the required tag and push it:

$ docker tag kalimatas/cowbot registry.heroku.com/guarded-island-34484/web
$ docker push registry.heroku.com/guarded-island-34484/web
The push refers to a repository [registry.heroku.com/guarded-island-34484/web]
// ... other Docker output

If you open the application’s URL in a browser now, it will not work:

$ heroku open -a guarded-island-34484

This will open a new browser tab, and you will see an Application error message there. It happens because our application requires the COWSAY_TOKEN environment variable to be set: check the init() function in the Source code overview section. We can prove it by reading the application’s logs:

$ heroku logs -a guarded-island-34484 | grep COWSAY_TOKEN
2017-09-15T06:56:37.477909+00:00 app[web.1]: panic: COWSAY_TOKEN is not set!
// ... other output

Obviously, we don’t have it set in Heroku by default - we need to set it. This is done via the application’s configuration:

$ heroku config:set -a guarded-island-34484 COWSAY_TOKEN=<your_verification_token>
Setting COWSAY_TOKEN and restarting ⬢ guarded-island-34484... done, v4
COWSAY_TOKEN: <your_verification_token>

If you open the application now with heroku open -a guarded-island-34484, you will see another error, Method Not Allowed, but this is expected, because we only allow POST requests.

Let’s validate that the app is available at its public URL:

curl -XPOST https://guarded-island-34484.herokuapp.com/ -d 'token=<your_verification_token>&text=Hello, cow!'
{"response_type":"in_channel","text":"``` _____________\n\u003c Hello, cow! \u003e\n -------------\n        \\   ^__^\n         \\  (oo)\\_______\n            (__)\\       )\\/\\\n                ||----w |\n                ||     ||\n```"}

Amazing! Now don’t forget to set this URL as the Request URL in your slash command’s settings in the Slack admin interface!

Finally, open a Slack client, log in with your account, and start typing the name of the command - you will see a hint:

And the result:

Bedtime ideas

Ah, bedtime. It’s time to close Reddit and get some rest. But not for your brain. Have you ever wondered why you’re so damn.. ehh.. creative before sleep? Why does your brain start to produce this stream of crazy ideas? I have, so I did a bit of research on the topic.

It turns out that it has nothing to do with the time of day (well, only a bit), but rather with your energy cycles. Let’s break it down.

So you have all these pieces of information and the relations between them in your brain, which we may call knowledge. How do you produce a new idea? Well, it’s easy - you somehow have to construct a new connection in your head between already existing facts. But during the day, when you’re rested and focused (OK, let’s pretend that’s true) on performing some task, your smartass brain rejects your shiny new connections, because they just seem awful. A phone without buttons? Are you fucking crazy?!

But when you’re tired, that’s no longer the case. You lie down, try to relax, shut out most of the incoming data, and your little grey friend, unable to concentrate properly anymore, starts to wander in an unexplored land of unrelated things. What seemed insane becomes kinda plausible. And bang! A new connection in your head has been born.

Why does it happen right before falling asleep? Just because our energy cycles are determined by our social life - we’re rested in the morning and tired in the evening. Normal people are, at least.