Lessons learned from making a SaaS* completely serverless**

* Software as a Service

** serverless as in everything runs on AWS lambda.

Short summary

I recently launched TweetScreenr. Going completely serverless kept the cloud costs low during development. I used the serverless framework to deploy my python flask API endpoints as AWS lambda functions. However, this slowed down development and I ran into tricky issues and limitations.

The full story

I recently launched TweetScreenr, a service that would create a personalized news feed out of your Twitter timeline, and I decided to use a completely serverless stack.

Why I decided to go serverless

I decided to re-write my app as serverless in an effort to avoid issues I faced in the past with regular AWS EC2 instances. Skip this section if you do not care about my motivation behind switching to serverless. Summary – I thought it would be cheaper and would require less babysitting.

I had launched the same service (minus some nice features) under a different name a year ago. It was a regular python flask web app with sqlite as the database and rabbitMQ as the message broker. I wasn’t expecting much traffic, so everything – the database, the message broker and the web server – was running on an AWS EC2 t2.micro. It had 1 vCPU and 1 GB of RAM and cost around $5 a month. Needless to say, it couldn’t handle the traffic from being on the front-page of HN. This was expected. But instead of requests just taking longer or the service being temporarily unavailable, the EC2 instance went into a failed state and required manual intervention to restore the service. This wasn’t expected. I was hoping that the t2.micro would become unresponsive in the face of overwhelming traffic and would become functional again as the traffic died down. I didn’t expect it to crash and require a manual restart.

What was happening was that my t2.micro instance was running out of CPU credits and was being throttled to 5% of CPU performance, which isn’t enough to even run the kernel. Burstable instances provide a baseline CPU performance and the ability to burst above this baseline when the workload demands it. You accumulate CPU credits while the CPU runs below the baseline and spend them while bursting. I didn’t know that using up all the instance’s CPU credits could prevent the kernel from running. Using a t2.small didn’t solve the issue – I eventually ran out of CPU credits and the instance failed and required manual intervention. The need to intervene manually meant that if the service went down in the middle of the night, it stayed down until I woke up the next morning.

You can argue that I was using the wrong EC2 instance type for the job and you would be right. I chose a t2.micro because it was the cheapest. The cheapest non-burstable instance I could find was an a1.medium for $18 a month, or $11 a month if reserved for a year. For a side project that didn’t have a plan to charge its users (yet), I considered that expensive. I considered moving to a $5 linode, but I was worried I’d run into variants of the same issue. Given the choices, going serverless sounded like a good idea. Each request to my service would be executed in a different lambda function and hence wouldn’t starve for resources, even under high traffic. Moreover, I would be paying only for the compute I use. I did some calculations and figured that I could probably stay under the limits of the AWS free tier. It took me around a year to re-write the app to be completely serverless, add some new features and a paid tier, and launch again on HN. This time, the app did not go down. But the post also didn’t make it to the front-page, so I do not know what would happen if it were subjected to the same amount of traffic.

The serverless stack

I wanted to use python flask during development and deploy each API route as a different lambda function. I used the confusingly named serverless framework to do exactly that. The serverless framework is essentially a wrapper around a cloud provider (AWS in my case) that automates the annoying job of creating an AWS API gateway endpoint for each of the API routes in your app. It also has a bunch of plugins to handle things like managing a domain name, serving static S3 assets, etc.
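To make this concrete, here is a minimal sketch of what such a configuration can look like. The service name, handler paths and routes below are invented for illustration – this is not TweetScreenr’s actual config:

```yaml
# Hypothetical serverless.yml sketch – names and paths are made up
service: my-flask-app

provider:
  name: aws
  runtime: python3.9

functions:
  get_feed:
    handler: src.api.get_feed       # each route becomes its own lambda
    events:
      - http:
          path: feed
          method: get
  update_settings:
    handler: src.api.update_settings
    events:
      - http:
          path: settings
          method: post
```

Each entry under `functions` becomes a separate lambda, and each `http` event becomes an API gateway endpoint pointing at it.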

I also decided to use DynamoDB – if I had gone with a relational database, I’d again have had to decide where to host it (another t2.micro?). Instead of self-hosting RabbitMQ, I decided to use AWS SQS because my usage fits in the free tier and SQS lets me easily configure a lambda function to process messages in the queue. If I had self-hosted RabbitMQ, I would have had to use something like celery to process messages added to the queue, and that would have been an additional headache.
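Wiring a lambda to an SQS queue is a one-stanza affair in serverless.yml. A sketch, with an invented handler path and queue ARN:

```yaml
# Hypothetical sketch – handler and ARN are invented
functions:
  process_message:
    handler: src.worker.process_message
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:123456789012:my-queue
```

With this in place, AWS invokes the function with batches of messages from the queue; no celery worker process to babysit.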

The good

Costs were low

I was able to keep costs exceptionally low during development. I wanted to have separate test, dev and prod stages. All experimental features are tested on test, then promoted to dev once they are stable enough. If nothing explodes in dev for a while, the changes get deployed to prod. This would have required 3 EC2 instances running round the clock. Even if I were to use t2.micros, it would have been $15 a month to keep all three running all the time. It costs $0 with my AWS + serverless framework setup. Costs continued to remain low (i.e. zero) even after I launched. I currently have 8 active users (including me) and I’m yet to exceed the AWS free tier.

Serverless framework gives you free monitoring

The serverless framework gives you error reporting for free. Instead of fiddling around with AWS cloudwatch or sentry, I can open up the serverless dashboard and see an overview of the health of the app. I’ve tried setting up something similar using cloudwatch and gave up because of the atrocious UX.

Some default graphs from the serverless dashboard. I can quickly see if my lambda functions are erroring out.

Infrastructure as code

I was forced into using infrastructure as code and that’s a good thing. The serverless framework requires you to write a serverless.yml file that describes the resources your application needs. For TweetScreenr, this included the dynamoDB table names, global secondary indexes, the SQS queue name, the domain to deploy to, etc. When you deploy using serverless deploy (this is another nice thing – I can deploy to prod with a single command), the serverless framework will create these resources for you. This made things like setting up an additional deployment stage (e.g. a test instance) or deploying to a different AWS account really easy.

Excellent customer support

The serverless framework has excellent customer support. When something did not work (which was often – more on that later), I could ask for help using the chat in the dashboard and someone from customer support would help me resolve my issue. This happened twice. Oh, and I’m a free user. I do not want to promote the serverless framework, but their great customer support definitely deserves a mention. If I was treated this well as a free user, I imagine they treat their paying customers even better.

The ugly

Despite the fantastic savings in cost, the niceties of infrastructure as code and the convenience of single-command deployments, my development experience with the serverless framework + AWS was frustrating. Most of these issues are shortcomings of the broader serverless paradigm and are not specific to either AWS or the serverless framework. But a lot of them were just AWS being a pain in the ass, and a few were problems introduced by the serverless framework.

Lambda functions are slow

My lambda functions take 2 seconds to start up (cold start). According to this post, the main culprit seems to be the botocore library. Another quirk is that AWS lambda couples memory and CPU power: CPU power scales linearly with memory from 128 MB up to 1.7 GB, at which point AWS allocates your function an entire CPU core. The lambda functions on TweetScreenr’s test and dev instances are configured to use 128 MB of memory and they are slooooow. In the production instance of TweetScreenr I configured the functions to use 512 MB and this made cold starts considerably faster, even though none of the underlying lambda functions use more than 100 MB of RAM during execution.
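In serverless.yml terms the knob is `memorySize`, settable provider-wide or per function. A sketch with an invented handler path:

```yaml
# Hypothetical sketch – handler name is invented
provider:
  memorySize: 512        # default for every function in the service

functions:
  poll_for_user:
    handler: src.usermodel.poll_for_user
    memorySize: 128      # cheap-but-slow override for a single function
```

Because CPU scales with memory, bumping `memorySize` is also the only way to buy faster cold starts, even if the function never uses the extra RAM.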

Lambda functions can’t get too large

There is also a limit to how large your lambda function can get. I wrote my web app as a regular python flask app and thus used a sane amount of libraries/dependencies. I quickly ran into the 50 MB limit for lambda deployment packages. Fortunately, there’s a serverless framework plugin for lambda layers. I was able to put all my dependencies into a layer to keep the deployment package under 50 MB.
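The serverless framework also supports declaring layers directly in serverless.yml. A rough sketch (directory and layer names are my own, and this is not necessarily the exact plugin setup the post used): a layer is declared at the top level and referenced from functions via the CloudFormation name the framework generates for it.

```yaml
# Hypothetical sketch – layer and handler names are invented
layers:
  pythonDeps:
    path: layer          # a directory containing python/ with site-packages

functions:
  get_feed:
    handler: src.api.get_feed
    layers:
      - !Ref PythonDepsLambdaLayer   # framework exposes the layer as <Name>LambdaLayer
```

With the dependencies in the layer, the per-function deployment package contains only your own code.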

DynamoDB limitations

Among all the things that are wrong with serverless, this was the most infuriating.

DynamoDB has a StringSet (SS) attribute type that can be used to store a set of strings. Turns out that you cannot do subset checks with SS. In TweetScreenr, I wanted to check if the set of domains in a tweet is a subset of the set of domains the user has blocked. This cannot be done. I have to do the equivalent of contains(block_list, x) for each x. This is bad, since I’ll have to retrieve all the tweets from the database (and pay for this retrieval) and apply the filter in python. In postgres, I could have easily done this with postgres arrays and the @> containment operator (a.k.a the bird operator).
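Since the subset check cannot be pushed into the query, it has to happen client-side after retrieval. A sketch of the python-side filter (the item shapes are invented for illustration):

```python
def filter_blocked(tweets, blocked_domains):
    """Drop tweets whose domain set is a subset of the user's block list.

    This is the client-side equivalent of the subset check that DynamoDB's
    StringSet type cannot express - every item must be fetched (and paid for)
    before we can filter it.
    """
    blocked = set(blocked_domains)
    return [t for t in tweets if not set(t["domains"]) <= blocked]

tweets = [
    {"id": 1, "domains": {"spam.example", "ads.example"}},   # fully blocked
    {"id": 2, "domains": {"news.example", "spam.example"}},  # only partially blocked
]
kept = filter_blocked(tweets, {"spam.example", "ads.example"})
# only tweet 2 survives the filter
```

In postgres the same check would be a single WHERE clause with the @> operator, evaluated inside the database.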

DynamoDB also won’t let you create an index (a GSI) on a bool attribute. I have an is_user attribute that is a boolean, and the idea was to create an index on is_user so that I can quickly get a list of all users by checking whether is_user is True. Nope. No GSIs allowed on bool. I had to make is_user a string attribute to create an index on it.
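The root cause is that DynamoDB attribute definitions only accept the types S (string), N (number) and B (binary), so a GSI key can never be a BOOL. A sketch of the string workaround as a CloudFormation resource in serverless.yml (table and index names are invented):

```yaml
# Hypothetical sketch – table and index names are invented
resources:
  Resources:
    TweetsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: tweets
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
          - AttributeName: is_user
            AttributeType: S   # "true"/"false" stored as strings; BOOL is not allowed here
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        GlobalSecondaryIndexes:
          - IndexName: is_user-index
            KeySchema:
              - AttributeName: is_user
                KeyType: HASH
            Projection:
              ProjectionType: ALL
```

Querying the GSI for the string "true" then returns all users.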

Also, pagination sucks with DynamoDB. There’s no way to get the total number of items (well, the number of items having certain attributes – not the overall size of the database). This is why pagination in TweetScreenr uses simple next and prev buttons instead of displaying the total number of pages.
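DynamoDB paginates with an opaque LastEvaluatedKey cursor rather than offsets, which is why next/prev buttons are the natural UI. A sketch of cursor-style paging; the fake query function stands in for a boto3 Table.query/scan call (which accepts ExclusiveStartKey and returns LastEvaluatedKey while more items remain) so the example runs without AWS:

```python
def get_page(query_fn, start_key=None, limit=3):
    """Fetch one page; return (items, cursor_for_next_page_or_None)."""
    kwargs = {"Limit": limit}
    if start_key is not None:
        kwargs["ExclusiveStartKey"] = start_key
    resp = query_fn(**kwargs)
    return resp["Items"], resp.get("LastEvaluatedKey")

# A fake "table" so the sketch is runnable: the cursor is just an offset.
DATA = list(range(7))

def fake_query(Limit, ExclusiveStartKey=0):
    items = DATA[ExclusiveStartKey:ExclusiveStartKey + Limit]
    resp = {"Items": items}
    if ExclusiveStartKey + Limit < len(DATA):
        resp["LastEvaluatedKey"] = ExclusiveStartKey + Limit
    return resp

page1, cursor = get_page(fake_query)          # first page
page2, cursor = get_page(fake_query, cursor)  # "next" button
page3, cursor = get_page(fake_query, cursor)  # cursor is now None: last page
```

Note there is no way to jump to page N or show "page 3 of 17" with this scheme – only to follow the cursor forward.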

I know what you are thinking – DynamoDB is not a good fit for my use case. But my use case is to simply pull tweets from Twitter and associate them with a user. No fancy joins required. If DynamoDB (and NoSQL in general) is not a good fit for such a contained use case, then what is the intended use case for DynamoDB?

Errors thrown by the serverless framework cli were misleading

Not everything was rosy on the development front either. Mistakes in serverless.yml were hard to debug. For example, I had this (mis-)configured yml:

    handler: src.usermodel.send_digest_for_user
    memorySize: 128
    events:
      - sqs:
          arn: !Ref DigestTopicStaging
          topicName: "DigestTopicStaging"

The problem here was that I was passing a reference to a topic where the yml expected an SQS queue. This is the stacktrace I got when I ran serverless deploy:

✖ Stack core-dev failed to deploy (12s)
Environment: linux, node 16.14.0, framework 3.7.2 (local) 3.7.2v (global), plugin 6.1.5, SDK 4.3.2
Credentials: Local, "serverless" profile
Docs:        docs.serverless.com
Support:     forum.serverless.com
Bugs:        github.com/serverless/serverless/issues

TypeError: EventSourceArn.split is not a function
    at /home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/plugins/aws/package/compile/events/sqs.js:71:37
    at /home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/plugins/aws/package/compile/events/sqs.js:72:15
    at Array.forEach (<anonymous>)
    at /home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/plugins/aws/package/compile/events/sqs.js:46:28
    at Array.forEach (<anonymous>)
    at AwsCompileSQSEvents.compileSQSEvents (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/plugins/aws/package/compile/events/sqs.js:36:47)
    at PluginManager.runHooks (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/classes/plugin-manager.js:530:15)
    at async PluginManager.invoke (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/classes/plugin-manager.js:564:9)
    at async PluginManager.spawn (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/classes/plugin-manager.js:585:5)
    at async before:deploy:deploy (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/plugins/deploy.js:40:11)
    at async PluginManager.runHooks (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/classes/plugin-manager.js:530:9)
    at async PluginManager.invoke (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/classes/plugin-manager.js:563:9)
    at async PluginManager.run (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/classes/plugin-manager.js:604:7)
    at async Serverless.run (/home/ec2-user/environment/paperdelivery/node_modules/serverless/lib/serverless.js:174:5)
    at async /home/ec2-user/environment/paperdelivery/node_modules/serverless/scripts/serverless.js:687:9

The error message was utterly unhelpful. I solved this using the good old “stare at the config until it dawns on you” technique. Not recommended.

Serverless framework doesn’t like it if you change things using the AWS console

If I decide to start over and delete the app using serverless remove, it would not work – it complains that the domain name config I’ve associated with an API endpoint must be manually deleted. Fine, I did that. While I was at it, I also manually deleted the API gateways – they were going to be removed by serverless remove anyway. Running serverless remove again now resulted in an error because it could not find the API gateways I had just deleted manually. I wish the serverless framework had ignored that and continued to delete the rest of the CloudFormation stack it had created. Since the serverless cli wouldn’t help me, I had to click around the AWS console a bazillion times and delete everything manually. Arghhhhhh.

Something similar happened when I manually deleted a lambda function and tried to deploy again. My expectation was that the serverless framework would see that one lambda end-point is missing and re-create just that. Instead, I got this:

UPDATE_FAILED: PollUnderscoreforUnderscoreuserLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "Lambda function core-dev-poll_for_user could not be found" (RequestToken: dcc0e4a3-5627-5d7a-2569-39e25c268ff2, HandlerErrorCode: NotFound)

It really doesn’t like you changing things directly in the AWS console.

Outdated documentation about the serverless framework

I was trying to get the serverless framework to create an SQS queue. This blog post from 2018 explicitly mentions that serverless will not create a queue for you – you have to manually create it using the AWS console and use the ARN in the serverless.yml. That information is likely outdated since this stack overflow answer tells you how to get serverless to create the queue for you. There are more examples of outdated documentation on the serverless website.


Final thoughts

Making the app completely serverless was a painful experience. I don’t want to do that ever again. But serverless makes it so cheap to run your app if you don’t have heavy traffic. I should also stay away from AWS. But again, they are the cheapest. Arghh.

Maybe I should set more realistic expectations on what it costs to host a side project. If I am willing to pay for two a1.medium instances (one for the web server and one for the database), or the equivalent non-AWS instances, I would be a happy man. That’s $36 a month, or $432 ($264 if I reserve them) a year. That’s not trivial, but that’s affordable. However, I tend to work on multiple side projects. $100+ a year to host each of them is not practical. Let me know in the comments if you have ideas.

Film simulations from scratch using Python

Disclaimer: The post is more about understanding LUTs and HaldCLUTs and writing methods from scratch to apply these LUTs to an image rather than coming up with CLUTs themselves from scratch.


  1. What are film simulations?
  2. CLUTs primer
  3. Simple hand-crafted CLUTs
  4. The identity CLUT
  5. HaldCLUTs
  6. Applying a HaldCLUT
  7. Notes and further reading

There is also an accompanying notebook, in case you want to play around with the CLUTs.

What are film simulations?

Apparently, back in the day, people shot pictures with analog cameras that used film. If you wanted a different “look” to your pictures, you would load a different film stock that gave you the desired look. This is akin to current-day Instagram filters, though more laborious. Some digital camera makers, like Fujifilm, started out as makers of photographic films (and they still make them), and transitioned into making digital cameras. Modern mirrorless cameras from Fujifilm have film simulation presets that digitally mimic the style of a particular film stock. If you are curious, John Peltier has written a good piece on Fujifilm’s film simulations. I was intrigued by how these simulations were achieved and this is a modest attempt at untangling them.

CLUTs primer

A CLUT, or a Color Look Up Table, is the primary way to define a style or film simulation. For each possible RGB color, a CLUT tells you which color to map it to. For example, a CLUT might specify that all green pixels in an image should be yellow:

# map green to yellow
(0, 255, 0) -> (255, 255, 0)

The actual format in which this information is represented can vary. A CLUT can be a .cube file, a HaldCLUT png, or even a pickled numpy array as long as whatever image editing software you use can read it.

In an 8-bit image, each channel (i.e. red, green or blue) can take values from 0 to 255. Our CLUT should theoretically have a mapping for every possible color – that’s 256 x 256 x 256 colors. In practice, however, CLUTs are way smaller. For example, an 8-bit CLUT would divide each channel into ranges of 32 (i.e. 256 divided by 8). Since we have 3 channels (red, green and blue), our CLUT can be imagined as a three-dimensional cube:

A standard 3D CLUT. Image Credits

To apply a CLUT to an image, each color in the image is assigned to one of the cells in the CLUT cube, and the color of the pixel in the original image is changed to whatever RGB color is in its assigned cell in the CLUT cube. Hence the color (12, 0, 0) would belong to the first cell along the red axis in the top left corner of the cube. This also means that all the shades of red between (0, 0, 0) and (31, 0, 0) will be mapped to the same RGB color. Though that sounds terrible, an 8-bit CLUT usually produces images that look fine to our eyes. Of course, we can increase the “quality” of the resulting image by using a more precise (e.g. 12-bit) CLUT.

Simple hand-crafted CLUTs

Before we craft CLUTs and start applying them to images, we need a test image. For the sake of simplicity, we conjure up a little red square:

from PIL import Image
img = Image.new('RGB', (60, 60), color='red')

We will now create a simple CLUT that would map red pixels to green pixels and apply it to our little red square. We know that our CLUT should be a cube, and each “cell” in the cube should map to a color. If we create a 2-bit CLUT, it will have the shape (2, 2, 2, 3). Remember that our CLUT is a cube with each side of “length” 2, and that each “cell” in the cube should hold an RGB color – hence the 3 in the last dimension.

import numpy as np
clut = np.zeros((2, 2, 2, 3))
transformed_img = apply_3d_clut(clut, img, clut_size=2)

We haven’t yet implemented the “apply_3d_clut()” method. This method will have to look at every pixel in the image and figure out the corresponding mapped pixel from the CLUT. The logic is roughly as follows:

  1. For each pixel in the image:
    1. get the (r, g, b) values for the pixel
    2. Assign the (r, g, b) values to a “cell” in our CLUT
    3. Replace the pixel in the original with the color in the assigned CLUT “cell”

We should be careful with step 2 above – since we have a 2-bit CLUT, we want color values below 128 to be mapped to the first cell and values 128 and above to be mapped to the second cell.
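To check that the rounding really puts the boundary where we want it, here is the cell-assignment arithmetic in isolation for a 2-bit CLUT:

```python
def clut_index(value, clut_size=2):
    # Scale an 8-bit channel value (0-255) down to a CLUT cell index.
    # This is the same round()-based mapping used in apply_3d_clut below.
    return round(value * (clut_size - 1) / 255)

# For a 2-bit CLUT the boundary falls between 127 and 128:
low = clut_index(127)   # rounds down to the first cell
high = clut_index(128)  # rounds up to the second cell
```

round(127 / 255) is round(0.498) = 0 and round(128 / 255) is round(0.502) = 1, so the split lands exactly where step 2 needs it.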

from tqdm import tqdm
def apply_3d_clut(clut, img, clut_size=2):
    """clut must have the shape (size, size, size, num_channels)"""
    num_rows, num_cols = img.size
    filtered_img = np.copy(np.asarray(img))
    scale = (clut_size - 1) / 255
    img = np.asarray(img)
    for row in tqdm(range(num_rows)):
        for col in range(num_cols):
            r, g, b = img[col, row]
            # (clut_r, clut_g, clut_b) together represents a "cell" in the CLUT
            # Notice that we rely on round() to map the values to "cells" in the CLUT
            clut_r, clut_g, clut_b = round(r * scale), round(g * scale), round(b * scale)
            # copy over the color in the CLUT to the new image
            filtered_img[col, row] = clut[clut_r, clut_g, clut_b]
    filtered_img = Image.fromarray(filtered_img.astype('uint8'), 'RGB')
    return filtered_img

Once you implement the above method and apply the CLUT to our image, you will be treated with a very underwhelming little black box:

Our CLUT was all zeros and, unsurprisingly, the red pixels in our little red square were mapped to black when the CLUT was applied. Let us now manipulate the CLUT to map red to green:

clut[1, 0, 0] = np.array([0, 255, 0])
transformed_img = apply_3d_clut(clut, img, clut_size=2)

Fantastic, that worked! Time to apply our CLUT to a real image:

This unassuming Ape truck from Rome, filled with garbage, is going to be our guinea pig. Our “apply_3d_clut()” method loops over the image pixel by pixel and is extremely slow – we’ll fix that soon enough.

import urllib.request
truck = Image.open(urllib.request.urlopen("https://i.imgur.com/ahpSmLP.jpg"))
green_truck = apply_3d_clut(clut, truck, clut_size=2)

That’s a bit too green. We can see that the reds in the original image did get replaced by green pixels, but since we initialized our CLUT to all zeroes, all the other colors in the image were replaced with black pixels. We need a CLUT that maps all the reds to greens while leaving all the other colors alone.

Before we do that, let us vectorize our “apply_3d_clut()” method to make it much faster:

def fast_apply_3d_clut(clut, clut_size, img):
    """clut must have the shape (size, size, size, num_channels)"""
    num_rows, num_cols = img.size
    filtered_img = np.copy(np.asarray(img))
    scale = (clut_size - 1) / 255
    img = np.asarray(img)
    clut_r = np.rint(img[:, :, 0] * scale).astype(int)
    clut_g = np.rint(img[:, :, 1] * scale).astype(int)
    clut_b = np.rint(img[:, :, 2] * scale).astype(int)
    filtered_img = clut[clut_r, clut_g, clut_b]
    filtered_img = Image.fromarray(filtered_img.astype('uint8'), 'RGB')
    return filtered_img

The identity CLUT

An identity CLUT, when applied, produces an image identical to the source image. In other words, the identity CLUT maps each color in the source image to the same color. The identity CLUT is a perfect base for us to build upon – we can change parts of the identity CLUT to manipulate certain colors while other colors in the image are left unchanged.

def create_identity(size):
    clut = np.zeros((size, size, size, 3))
    scale = 255 / (size - 1)
    for b in range(size):
        for g in range(size):
            for r in range(size):
                clut[r, g, b, 0] = r * scale
                clut[r, g, b, 1] = g * scale
                clut[r, g, b, 2] = b * scale
    return clut 

Let us generate a 2-bit identity CLUT and see how applying it affects our image

two_bit_identity_clut = create_identity(2)
# fast_apply_3d_clut already returns a PIL Image, so no further conversion is needed
identity_truck = fast_apply_3d_clut(two_bit_identity_clut, 2, truck)

The two-bit truck

That’s in the same ballpark as the original image, but clearly there’s a lot wrong there. The problem is our 2-bit CLUT – we had a palette of only 8 colors (2 * 2 * 2) to choose from. Let us try again, but this time with a 12-bit CLUT:

twelve_bit_identity_clut = create_identity(12)
identity_truck = fast_apply_3d_clut(twelve_bit_identity_clut, 12, truck)
Left – the original image, right – the image after applying the 12-bit identity CLUT

That’s much better. In fact, I can see no discernible differences between the images. Wunderbar!

Let us try mapping the reds to the greens again. Our goal is to map all pixels that are sufficiently red to green. What’s “sufficiently red”? For our purposes, all pixels that end up being mapped to the reddish corner of the CLUT cube deserve to be green.

green_clut = create_identity(12)
green_clut[5:, :4, :4] = np.array([0, 255, 0])
green_truck = fast_apply_3d_clut(green_clut, 12, truck)

That’s comically bad. Of course, we got what we asked for – some reddish parts of the image did get mapped to a bright ugly green. Let us restore our faith in CLUTs by attempting a slightly less drastic and potentially pleasing effect – make all pixels slightly more green:

green_clut = create_identity(12)
green_clut[:, :, :, 1] += 20
# clip so that bright greens don't wrap around when the result is cast to uint8
green_clut = np.clip(green_clut, 0, 255)
green_truck = fast_apply_3d_clut(green_clut, 12, truck)
Left – the original image, Right – the image with all pixels shifted more to green

Slightly less catastrophic. But we didn’t need CLUTs for this – we could have simply looped through all the pixels and manually added a constant value to the green channel. Theoretically, we can get more pleasing effects by fancier manipulation of the CLUT – instead of adding a constant value, maybe add a higher value to the reds and a lower value to the whites? You can probably see where this is going – coming up with good CLUTs (at least programmatically) is not trivial.

What do we do now? Let’s get us some professionally created CLUTs.


HaldCLUTs

We are going to apply the “Fuji Velvia 50” CLUT that is bundled with RawTherapee to our truck image. These CLUTs are distributed as HaldCLUT png files, and we will spend a few minutes understanding the format before writing a method to apply a HaldCLUT to the truck. But why HaldCLUTs?

  1. HaldCLUTs are high-fidelity. Our 12-bit identity CLUT was good enough to reproduce the image. Each HaldCLUT bundled with RawTherapee is equivalent to a 144-bit 3D CLUT. Yes, that’s effectively a CLUT of shape (144, 144, 144, 3).
  2. However, the real benefit of using HaldCLUTs is the file size. Adobe’s .cube CLUT format is essentially a plain text file with RGB values. Since each character in the text file takes up a byte, a 144-bit CLUT in .cube format takes up around 32 MB on disk. The equivalent HaldCLUT png image file is around a megabyte. But png images are two-dimensional. How can we encode three-dimensional data using a two-dimensional image? We’ll see.
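The numbers check out with a bit of arithmetic (the bytes-per-line figure for .cube is a rough assumption on my part):

```python
# A level-12 HaldCLUT distinguishes 144 values per channel:
entries = 144 ** 3               # number of colors in the table

# A .cube file stores one "R G B" text line per entry; at roughly
# 12 bytes per line, that's on the order of tens of MB of plain text:
approx_cube_mb = entries * 12 / 1024 / 1024

# The same table fits exactly into a 1728 x 1728 png,
# since 1728 * 1728 == 144 ** 3:
png_pixels = 1728 * 1728
```

A png of that size compresses to around a megabyte, which is why HaldCLUTs are the format of choice here.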

Let’s look at an identity HaldCLUT:

The identity HaldCLUT, generated using convert hald:12 -depth 8 -colorspace sRGB hald_12.png

Pretty pretty colors. You’d have noticed that the image seems to have been divided into little cells. Let’s zoom in on the cell on the top-left corner:

We notice a few things – the pixel on the top-left is definitely black, so it represents the first “bucket” or “cell” in a 3D CLUT, and pure blacks (i.e. rgb(0, 0, 0)) are going to be mapped to the color present in this bucket. Of course, the pixel at (0, 0) in the above image is black because we are dealing with an identity CLUT here – a different CLUT could have mapped the index (0, 0, 0) to gray. The confusing part is figuring out how to index into the HaldCLUT. Let’s say we have a bright red pixel with the value (200, 0, 0) in our source image. If we were dealing with a normal 144-bit 3D CLUT, we would know that a red value of 200 belongs to the index 200 * 144 / 255 ≈ 113, and we would replace the color of this pixel with whatever was at CLUT[113][0][0]. But we are not dealing with a 3D CLUT here – we are dealing with a 2D image, while we have to index into this image as if it were a 3D CLUT.

The entire identity HaldCLUT image in our example has the shape (1728, 1728), and each of those little cells that you see has the shape (12, 144), and there are 144 such cells in a single column of the image (i.e vertically). The HaldCLUT, as you can see, has 12 columns. Hence we have 1728 cells in the entire HaldCLUT, each cell having the shape (12, 144). This is how we index into a HaldCLUT file:

(if the description doesn’t make much sense, it is followed by a code snippet that’s hopefully clearer)

  1. Within each cell, the red index always changes from left to right. In our top-left cell, it changes from 0 to 143. This is the case in each row within each cell – the red index is always 0 in the first column of a cell, and 1 in the second column and so on. Since each cell has 12 rows, in each of these rows the red index changes from 0 to 143.
  2. The green index is constant within each row of a cell, increments by 1 across cells horizontally, and wraps around to the next image row. So the pixel at position (143, 0) in the HaldCLUT image represents the index (143, 0, 0), the pixel at (144, 0) represents the index (0, 1, 0), and the pixel at (0, 1) – the first pixel of the second image row – represents the index (0, 12, 0).
  3. The blue index is constant everywhere within a cell, and increments by 1 across cells vertically. So the pixel at position (0, 11) represents the index (0, 132, 0), while the pixel at (0, 12) represents the index (0, 0, 1). Notice how both the red and green indexes reset to 0 when we move down the HaldCLUT image by an entire cell.
The top-left corner extracted from the full identity HaldCLUT. Only the first 3 rows and two columns are shown here (the third column is clipped). Note that the annotations represent the index into the 3d CLUT that pixel represents if the HaldCLUT was instead a normal 3D CLUT. Each cell has the shape (12, 144). When there are two lines in the diagram seemingly coming out from the same pixel, I am trying to show how the represented index changes between adjacent pixels at a cell boundary.

Inspecting the identity HaldCLUT in python reveals the same info:

import math
identity = Image.open("identity.png")
identity = np.asarray(identity)
print("identity HaldCLUT has size: {}".format(identity.shape))
size = round(math.pow(identity.shape[0], 1/3))
print("The CLUT size is {}".format(size))
# The CLUT size is 12
print("clut[0,0] is {}".format(identity[0, 0]))
# clut[0,0] is [0 0 0]
print("clut[0, 100] is {}".format(identity[0, 100]))
# clut[0, 100] is [179   0   0]
print("clut[0, 143] is {}".format(identity[0, 143]))
# We've reached the end of the first row in the first cell
# clut[0, 143] is [255   0   0]
print("clut[0, 144] is {}".format(identity[0, 144]))
# The red channel resets, the green channel increments by 1
# clut[0, 144] is [0 1 0]
print("clut[0, 248] is {}".format(identity[0, 248]))
# clut[0, 248] is [186   1   0]
# Notice how the value in the green channel did not increase. This is normal - we have 256 possible values and only 144 "slots" to keep them, so the identity CLUT occasionally skips a value.
print("clut[0, 432] is {}".format(identity[0, 432]))
# clut[0, 432] is [0 5 0]
# ^ The red got reset, the CLUT skipped more values in the green channel and now maps to 5. This is the peculiarity of this CLUT. A different HaldCLUT (not the identity one) might have had a different value for this green channel step.
print("clut[0, 1727] is {}".format(identity[0, 1727]))
# clut[0, 1727] is [255  19   0]
# This is the last pixel in the first row of the entire image
print("clut[1, 0] is {}".format(identity[1, 0]))
# clut[1, 0] is [ 0 21  0]
# Notice how the value in the green channel "wrapped around" from the previous row
print("clut[1, 144] is {}".format(identity[1, 144]))
# Exercise for the reader: see if you can guess the output correctly 🙂
print("clut[12 0] is {}".format(identity[12, 0]))
print("clut[12 143] is {}".format(identity[12, 143]))
print("clut[12 144] is {}".format(identity[12, 144]))

Applying a HaldCLUT

Now that we’ve understood how a 3D CLUT is sorta encoded in a HaldCLUT png, let’s go ahead and write a method to apply a HaldCLUT to an image:

import math

import numpy as np
from PIL import Image

def apply_hald_clut(hald_img, img):
    hald_w, hald_h = hald_img.size
    clut_size = int(round(math.pow(hald_w, 1/3)))
    # We square clut_size: a size-12 HaldCLUT encodes a 3D CLUT with
    # 144 (= 12 * 12) levels per channel
    scale = (clut_size * clut_size - 1) / 255
    # Convert the PIL image to a numpy array
    img = np.asarray(img)
    # We are reshaping to (144 * 144 * 144, 3) - it helps with indexing
    hald_img = np.asarray(hald_img).reshape(clut_size ** 6, 3)
    # Figure out the 3D CLUT indexes corresponding to the pixels in our image
    clut_r = np.rint(img[:, :, 0] * scale).astype(int)
    clut_g = np.rint(img[:, :, 1] * scale).astype(int)
    clut_b = np.rint(img[:, :, 2] * scale).astype(int)
    filtered_image = np.zeros(img.shape)
    # Convert the 3D CLUT indexes into indexes for our HaldCLUT numpy array
    # and copy over the colors to the new image
    filtered_image[:, :] = hald_img[clut_r + clut_size ** 2 * clut_g + clut_size ** 4 * clut_b]
    filtered_image = Image.fromarray(filtered_image.astype('uint8'), 'RGB')
    return filtered_image

Let’s test our method by applying the identity HaldCLUT to our truck – we should get a visually unchanged image back:

import urllib.request

identity_hald_clut = Image.open(urllib.request.urlopen("https://i.imgur.com/qg6Is0w.png"))
identity_truck = apply_hald_clut(identity_hald_clut, truck)

Let us finally apply the “Fuji Velvia 50” CLUT to our truck:

velvia_hald_clut = Image.open(urllib.request.urlopen("https://i.imgur.com/31UrdAg.png"))
velvia_truck = apply_hald_clut(velvia_hald_clut, truck)
Left – the original image. Right – the image after applying the “Fuji Velvia 50” HaldCLUT

That worked! You can download more HaldCLUTs from the RawTherapee page. The monochrome (i.e. black and white) HaldCLUTs won’t work straight away because our apply_hald_clut() method expects a hald image with 3 channels (i.e. red, green and blue), while the monochrome HaldCLUT images have only 1 channel (the grey value). It won’t be difficult at all to change our method to support monochrome HaldCLUTs – I leave that as an exercise to the reader 😉

Notes and further reading

Remember how we saw that a 2-bit identity CLUT gave us poor results while a 12-bit one almost reproduced our image? That comparison is not entirely fair. Image editing software can interpolate between the missing values. For example, this is how PIL applies a 3D CLUT with linear interpolation.
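To see interpolation doing the heavy lifting, we can lean on Pillow's built-in Color3DLUT filter. A minimal sketch (the solid-color image is just a stand-in for a real photo):

```python
from PIL import Image, ImageFilter

# Pillow's Color3DLUT applies a 3D CLUT with interpolation between table
# entries. Even a tiny 4-level identity table leaves an image essentially
# unchanged, because the missing values are interpolated - which is why a
# low-size CLUT is not as lossy as the raw table suggests.
identity_lut = ImageFilter.Color3DLUT.generate(4, lambda r, g, b: (r, g, b))

img = Image.new("RGB", (32, 32), (186, 92, 37))  # stand-in for a photo
out = img.filter(identity_lut)
# out is (nearly) identical to img despite the 4-level table
```

Swapping the identity callback for a color-transforming one gives you a compact, interpolated alternative to the full 144-level HaldCLUT table.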

The “Fuji Velvia 50” HaldCLUT that we used is an approximation of Fujifilm’s proprietary Velvia film simulation, created (probably) by Pat David.

If you want to create your own HaldCLUT, the easiest way is to open the identity HaldCLUT png file in an image editing program (e.g. RawTherapee, Darktable, Adobe Lightroom) and apply global edits to it. For example, if you change the saturation and contrast values of the HaldCLUT png in the image editor, and then apply this modified HaldCLUT png (using our Python script, or a different image editor – it doesn’t matter how) to a different image, the resulting image will have more contrast and saturation. Neat, right?
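The same trick works without an image editor. Here is a sketch; edit_hald_clut, the simple contrast curve, and the file names are my own inventions for illustration:

```python
import numpy as np
from PIL import Image

def edit_hald_clut(in_path, out_path, contrast=1.2):
    """Apply a global contrast curve to a HaldCLUT png. Applying the saved
    png to any photo then applies the same contrast boost to that photo."""
    clut = np.asarray(Image.open(in_path), dtype=np.float64)
    # push values away from the midpoint -> more contrast
    edited = np.clip((clut - 127.5) * contrast + 127.5, 0, 255)
    Image.fromarray(edited.astype("uint8"), "RGB").save(out_path)
```

Running it on the identity HaldCLUT and then applying the result with apply_hald_clut() (or any editor that understands HaldCLUTs) gives every photo the same contrast boost.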

VimCharm: Approximating PyCharm on vim

Disclaimer: All I’ve done is write a few config files.

This is for you if:

  1. Your python IDE of choice is PyCharm
  2. You wish you had a command-line replacement for PyCharm on all the places you ssh into
  3. You wish you had access to at least some of the niceties of PyCharm when editing a one-off script, without having to create/import it into a new project.
  4. You are somewhat familiar with vim and can comfortably edit a single file on vim. You also know what a .vimrc is
VimCharm, featuring capreolus


PyCharm has worked wonderfully well for me, and the only time when I have to use something else is when I ssh into a server to put together a quick script. That something else tends to be vim, and this is an attempt to get vim as close to PyCharm as possible – especially the shortcuts – so that I can work on vim the way I work on PyCharm (well, almost). The end result is still a far cry from PyCharm, but it makes navigating a codebase over ssh significantly less painful (at least for me).

But PyCharm can work over ssh

Yeah, but I don’t use PyCharm for one-off scripts. Besides, it pleases me to know that if I can ssh into the server (from an ipad, a phone, or a washing machine), I have an (approximate) IDE I can work on.

List of working approximations

  • Sorta kinda uses the same colorscheme as PyCharm
  • Toggles a project navigation sidebar (NERDTree) using Alt + 1. Approximates PyCharm’s Ctrl + 1
  • Comment/uncomment multiple lines using Ctrl+/, just like PyCharm
  • Autocomplete
  • Navigate to the definition of a method/variable using Ctrl + Left click or Ctrl + b, just like PyCharm
  • Jump to the previous location using Alt + -. Approximates PyCharm’s Ctrl + Alt + Left Arrow
  • Fuzzy search for files using Ctrl + o. Approximates PyCharm’s double shift press
  • Search the entire code base using Alt + f. Approximates PyCharm’s Ctrl + Shift + f
  • Edits made in the search results window are reflected on to the underlying file, just like PyCharm
  • Syntax and linter errors show up as you type, just like PyCharm
  • If you are editing files that are part of a git repository, there are indicators on the gutter to show added, modified and subtracted lines, just like PyCharm
  • Pressing F12 brings up a terminal. Approximates PyCharm’s Alt + F12
  • Code folding using the minus (i.e. -) key. Approximates PyCharm’s Ctrl + - and Ctrl + +
  • Automatic file saves, just like PyCharm
  • Rename methods/variables and automatically fix imports etc. across files, just like PyCharm

Why not just use python-mode?

I simply could not figure out the shortcuts that python-mode used. I thought it would be easier and more flexible if I installed and configured the plugins myself.


You will need:

  • Vim 8
  • A Python virtualenv (or a conda environment). There’s some pip install involved, though this is optional
  • Patience


Go here for the .vimrc

Let’s start from a blank .vimrc

If you need a project-specific .vimrc, see this. If not, everything goes into your ~/.vimrc

Let us begin by being able to see line numbers and some sort of syntax highlighting everywhere. Put these on your .vimrc:

" Some basic stuff
set number
set showmatch
syntax on
imap jj <Esc>

The set showmatch is for highlighting matching parentheses – that’s useful. The last line maps double pressing the j key in insert mode to <Esc> – no more reaching for that far away Escape key using your pinky!

NERDTree for the sidebar

We’ll start our plugin-hoarding with NERDTree. With Vim 8, we can simply copy over a plugin directory to a certain place and Vim would just “pick it up” – there’s no need to use a plugin manager for achieving VimCharm. Create the necessary directory structure and clone NERDTree:

mkdir ~/.vim/pack/vimcharm/start/ -p
git clone https://github.com/preservim/nerdtree.git ~/.vim/pack/vimcharm/start/nerdtree

Open some file (any file) on vim and type :NERDTreeToggle – you should see the sidebar. Executing the same command again closes the NERDTree. PyCharm by default opens/closes the sidebar using Ctrl + 1. However, the terminal (and consequently Vim) cannot differentiate between 1 and Ctrl + 1, so we’ll map this to Alt + 1 instead. Before we do that, we need to determine what characters the terminal sends when we press the key combo. Simply run cat without any arguments, and press Alt + 1. You should see something like this:

You will also see that 1 and Ctrl + 1 produce the same character on cat – as far as I know, there’s no way around this.

We need to map the character sequence for Alt + 1 to :NERDTreeToggle on our .vimrc:

set <A-1>=^[1
nmap <A-1> :NERDTreeToggle<CR>

Take care not to simply copy-paste the character sequence into your .vimrc! That won’t work. You should open your .vimrc on Vim, go to insert mode, and press Ctrl+v – this puts a caret under your cursor – now press Alt + 1 and it should fill in the necessary characters there. Restart vim, and Alt + 1 should now open and close NERDTree.

Making it look like PyCharm

Gruvbox looks like the default PyCharm theme, kinda, so let’s get that:

git clone https://github.com/morhetz/gruvbox.git ~/.vim/pack/vimcharm/start/gruvbox

According to the installation page, we need to add this to our .vimrc:

autocmd vimenter * ++nested colorscheme gruvbox

Commenting and Uncommenting lines using Ctrl + /

We are going to use NERDCommenter for this. Clone it into the right directory, just as before:

 git clone https://github.com/preservim/nerdcommenter ~/.vim/pack/vimcharm/start/nerdcommenter

Ctrl+/ cannot be directly mapped on your .vimrc. So just like before, we insert the correct escaped character sequence into our .vimrc by going into insert mode, pressing Ctrl+v, and then pressing the desired keycombo (Ctrl + / in this case).

" The part after "=" in the below line should be inserted using Ctrl+v while in insert mode and then pressing Ctrl+/
set <F13>=^_
noremap <F13> :call NERDComment(0,"toggle")<CR>

" So that NERDCommenter can automatically decide how to comment a particular filetype
filetype plugin on

We are telling Vim to map the character sequences that Ctrl + / produces to the F13 key, which probably does not exist on your keyboard, and then we map F13 to the appropriate command to toggle comments. Restart Vim, open (or create) a python file and try Ctrl + / while in normal mode – it should comment/uncomment the line.


Autocomplete

Autocomplete on Vim does not feel as “fluid” as on PyCharm – for example, I haven’t managed to get it to work on imports – but it is still better than nothing. Get jedi-vim:

 git clone https://github.com/davidhalter/jedi-vim ~/.vim/pack/vimcharm/start/jedi-vim

Restart vim, and Ctrl + space should already be giving you autocomplete suggestions. IMHO, jedi-vim displays too much information (what even is that bar thing that comes up on top?) – all I wanted was a simple autocomplete prompt. Put this on your .vimrc to Make Autocomplete Great Again:

let g:jedi#show_call_signatures = 0
let g:jedi#documentation_command = ""
autocmd FileType python setlocal completeopt-=preview

EDIT: I also installed supertab along with jedi:

 git clone https://github.com/ervandew/supertab ~/.vim/pack/vimcharm/start/supertab

Go-to definition using Ctrl+click

This is something that jedi-vim already does – all we need are some .vimrc entries:

set mouse=a
let g:jedi#goto_command = "<C-LeftMouse>"
map <C-b> <C-LeftMouse>

The above lines enable mouse support in Vim, set Ctrl + left click as the combination for jedi’s goto command, and then recursively map Ctrl + b (which is also what PyCharm uses by default) to Ctrl + left click as a keyboard-friendly alternative.

I also prefer the goto command to open a new tab when navigating to a different file. Here’s how to enable that, along with Shift+j and Shift+k to move between tabs:

let g:jedi#use_tabs_not_buffers = 1
nnoremap J :tabp<CR>
nnoremap K :tabn<CR>

Jump to previous location using Alt + minus

If you end up navigating multiple tabs away using Ctrl+b, you can press Ctrl+o repeatedly to jump back to your original position. Press Ctrl+i to go in the other direction. These would come in handy if you have to quickly gg to the beginning of the file to add an import – you can then press Ctrl+o to go back to the line you were editing. I believe in PyCharm the default mappings for this are Ctrl + Alt + Left arrow and Ctrl + Alt + right arrow respectively. I remapped these to Alt + - and Alt + Shift + - (that would be in fact Alt + _ ):

set <A-->=^[-
noremap <A--> <C-O>
set <A-_>=^[_
noremap <A-_> <C-I>

Remember that copy-pasting these won’t work and you will have to enter insert mode and press Ctrl+v and then Alt + -

Fuzzy file search

Fuzzy file search is what PyCharm does when you press the Shift key twice.

Download CtrlP:

git clone https://github.com/kien/ctrlp.vim.git ~/.vim/pack/vimcharm/start/ctrlp

By default pressing Ctrl+p should bring up the search prompt. We can’t map this to double shift (as in PyCharm) since vim can’t recognize Shift key presses (unless it’s combined with a printable character). I decided to map this to Ctrl + o instead (“o” for open), though this is not any better than the default setting. On your .vimrc:

let g:ctrlp_map='<C-O>'
let g:ctrlp_cmd='CtrlPMixed'

The second line above specifies that the search should be over files, buffers, tags, etc. – you may omit it if you do not want buffers to show up in your search results. Ctrl + t on a search result will open it in a new tab.

Search everywhere and replace

One of the most useful features in PyCharm is the “search in project” dialog that Ctrl + Shift + f brings up. For example, if I delete/rename a hard-coded string literal, this is the dialog that I would bring up to look for all occurrences of that string literal so that I can rename/delete all of them – right from the search window.

Instead of using the built-in vimgrep or making an external call to the ubiquitous grep, we are going to use ack because, by default, it excludes annoying things like gitignored files and binaries from the search results.

  1. Somehow get ack on your target system
  2. git clone ack.vim to ~/.vim/pack/vimcharm/start/ack.vim

We are going to map Alt + F to a python-only smart-cased search with ack. Add these to your .vimrc:

set <A-F>=^[f
nnoremap <A-F> :Ack! -S --python 

Remember to press Ctrl+v and then Alt + F to get those escaped character sequences right. Also, note the extra space after --python. Without it, the search term (e.g. “foo”) that you type after pressing Alt + F would end up as “--pythonfoo”.

Restart vim and press Alt+f in a python file, enter your search term, and press enter. The results will be shown in a quick-fix window. Move your cursor to a search result and press enter to jump to that location. Press t to open that location in a new tab. Either of these would shift the focus to the editor. Press ctrl + w twice to shift focus back to the quick-fix window.

I usually use /<pattern> to search within the file, but sometimes it’s useful to do a slightly fancier search. I’ve wired Ctrl+f (the regular PyCharm find-in-this-file) to do a search within the open file:

nnoremap <C-F> :Ack!  %<Left><Left>

By default, you cannot make any changes to the contents of the quick-fix window. In PyCharm, the search results are editable and the changes are reflected in the underlying file. We can pull this off in Vim using quickfix-reflector:

git clone https://github.com/stefandtw/quickfix-reflector.vim.git ~/.vim/pack/vimcharm/start/quickfix-reflector.vim

That’s it! Now your edits on the search results should be reflected on the underlying files.

Spot syntax and linter errors as you type

The most annoying thing about writing Python on Vim, at least for me, was that the silly syntax errors I make won’t be discovered until I actually try to run my script – PyCharm usually catches these as you type. Let’s set this up on vim using ALE:

git clone https://github.com/dense-analysis/ale.git ~/.vim/pack/vimcharm/start/ale

You should also have a linter installed. I use pylint, and a quick pip install pylint does the trick. Restart vim and open a python file, and it should already be linting as you type. Since ALE works asynchronously, there will be a slight delay (around a second) between you making a mistake and it being flagged on Vim. In my opinion this is much better than synchronous linting, which freezes Vim – it’s why I chose ALE over Syntastic. However, the default ALE + pylint combo is too whiny for my taste – I don’t want warnings about how I’m not writing a docstring for every single method. I have this in my .pylintrc:

disable=trailing-whitespace,missing-function-docstring,missing-module-docstring,no-else-return,missing-class-docstring,invalid-name

The above is far from how I would like the linter to be configured, but it serves as an initial config. I also do not care for highlighting the offending word in a line – all I want is a non-invasive indication in the gutter. On your .vimrc:

let g:ale_set_highlights = 0

Show lines modified after the previous commit

Put vim-gitgutter at ~/.vim/pack/vimcharm/start/vim-gitgutter and restart vim. If you edit a file in a repo (or set up git in your current folder with git init), the modified lines will be marked in the gutter. By default it takes 4 seconds for the appropriate mark to appear in your gutter – let us reduce that by putting this line in the .vimrc:

set updatetime=100

The end result is rather unflattering. ALE and git-gutter do not work well together – git-gutter’s modification marks are drawn over by the linter warnings, and in some cases ALE ends up marking the wrong line with a warning. This thread suggests that there’s probably a way to get them to work the way I want, but I haven’t invested much time here.

Have a terminal handy

In Vim :term will open a terminal in a split window. Mapping this to F12 is trivial, but we want to hide this terminal (instead of killing it) and bring it back again on pressing F12. I could not get the “hide terminal on F12” part working, but I did figure out how to bring up a hidden terminal if it exists (or create a new terminal if it doesn’t) on pressing F12. Before we write a script for it, let’s go through the motions manually:

  1. Open Vim
  2. Type :term to open a terminal in a split window
  3. Type something on the terminal
  4. Press Ctrl+w and then type :hide to hide our terminal window
  5. To show our hidden terminal, type :sbuffer /bin/bash. This would open in a split window a buffer that has “/bin/bash” in its name. If you use something other than bash, you will have to change this string accordingly.

Here’s a LoadTerminal() Vim script I wrote that would bring up an existing bash buffer if it exists, or create a new one if it doesn’t:

function! LoadTerminal()
    let bnr = bufnr('!/bin/bash')
    if bnr > 0
        :sbuffer !/bin/bash
    else
        :term
    endif
endfunction

Save it as load_terminal.vim at ~/.vim/pack/vimcharm/start and add the following lines to your .vimrc:

source ~/.vim/pack/vimcharm/start/load_terminal.vim
map <F12> :call LoadTerminal()<CR>

The annoying part here is that we can’t map key presses on the terminal window – so you’ll have to press Ctrl + w and type :hide to hide the terminal. Do let me know if you find a way to map this to a keystroke.

Code folding

When you deal with large files, code folding (those tiny “-” signs that you click on PyCharm to collapse an entire class/method) is a godsend. Fortunately vim supports code folding right out of the box and all we need is this on our .vimrc:

set foldmethod=indent
set foldlevelstart=99
nnoremap - za
map _ zM
map + zR 

According to our mappings above, there are no folds (i.e. every code block is “open”) when we open a file (this is what foldlevelstart specifies). Shift + - (i.e. Shift and the minus key) will collapse all blocks, and Shift + + will open all blocks. Use the minus key (i.e. -) to toggle collapsing a single fold. You might also want to check out this answer for a quick overview of what’s supported.

Auto save

PyCharm saves the file as you type, sparing you from the hassle of having to press Ctrl+S across multiple tabs. We can get Vim to do this with vim-auto-save. Clone the repo to ~/.vim/pack/vimcharm/start/vim-autosave and add these to your .vimrc to enable auto-save:

let g:auto_save = 1                                                                                
let g:auto_save_events = ["InsertLeave", "TextChanged"] 

A word of caution before we proceed – auto-saving can get quite annoying if enabled globally. I use project-specific vimrcs and use auto-save along with git – so if I accidentally auto-save something that I shouldn’t have, a git diff is all I need to see what went wrong.

Refactor across files

Jedi-vim can do simple renaming, but I wanted something more powerful. Enter ropevim. You need to pip install rope, ropemode and ropevim. I have a miniconda environment set up, but you can install the packages to your global scope if you want to. We just need one file from the ropevim repo:

wget -O ~/.vim/pack/vimcharm/start/python_ropevim.vim https://raw.githubusercontent.com/python-rope/ropevim/master/ftplugin/python_ropevim.vim

Now let’s source it in our .vimrc:

 source ~/.vim/pack/vimcharm/start/python_ropevim.vim

Now add these to your .vimrc:

nnoremap <C-z> :RopeUndo<CR>                                                                       
set <A-z>=^[z                                                                                      
map <A-z> :RopeRedo<CR>                                                                            
map <F6> :RopeRename<CR>

We have mapped F6 to the rename operation, and Ctrl+z and Alt + z to undo and redo respectively. Remember not to copy paste the mapping for Alt + z, and press Ctrl+v and then the desired keycombo to enter it in your .vimrc.

Restart Vim, open a python file, and try to rename a variable using F6. You will get prompts to create a new ropevim project – press ‘y’ to create one locally, and then proceed to apply the rename. If you get an import error for ropevim when you start Vim, it’s probably because Vim uses the system python (which is probably a different version than the python in your virtualenv) and you pip-installed rope, ropemode and ropevim to a virtualenv. An alternative would be to do conda install -c conda-forge vim on your anaconda/miniconda env so that the Vim in your env will use the local python (and hence your installed pip packages) instead of the system one.

Final thoughts

If anything, this exercise has made me better appreciate the work that the JetBrains devs have put into their IDEs – all I wanted was a working subset of PyCharm’s basic features, and what I got was a rather modest approximation. Do let me know (open an issue on GitHub?) if you managed to get any closer to PyCharm than this.

Yet another kindle vs paper books post

TL;DR: Buy a kindle already. Reading multiple books at a time is surprisingly productive. 
I now read while I eat. There's a list at the end comparing the amount of reading I got
done before and after I switched to a Kindle.

I love smelling books. I also like stacking my books on a table, or on a shelf, so that I can look at them from time to time and be pleased with myself. The stacks also double as cheap room decor – books make the room more me. Then there is the added social benefit of being able to show off to anyone who cares to visit that I have read Thoreau and DDIA*.

* Only half-way through. It has been a year since I purchased the (physical) book. Sigh.

Despite all this, despite arguing with my friends that books are more than just the sum of their parts (late realization: a book has only one “part” that matters – the text) and that smelling a book and then flipping through it is a huge part of the “experience”, I switched to a kindle.

I feel ya, fellow book smellers.
Image credits: I got this from a Facebook photo comment :shrugs:

Before you brand me as a traitor and proclaim me unworthy of all the paper books that I have ever smelled, let me assure you that I did not succumb to the dark side easily. I borrowed my dad’s kindle paperwhite and tried it out for an entire month. Then I went out and bought myself a kindle.

The anti-library argument

The number of books I have left partially read has skyrocketed after I switched to the Kindle. And this is a good thing! I have always been a one-book-at-a-time man – I used to carry around the book I was currently reading everywhere, and I would promptly pick up another book after I was done reading the current one. Fast forward to the Kindle era, I find myself reading multiple books at the same time. I had imagined that this would be counter-productive. I’m so happy that I was wrong. The ability to switch to a book on a whim has let me read more than usual since I am reading what I feel like reading now, instead of trying to finish a book that I happened to pick up a week ago out of curiosity. My (unintentional) reading pattern until a few days ago looked like this: Deep Work by Cal Newport during the day, when I find it easy to focus, The Fountainhead for reading on the bed, and The Prince (40% complete) and The rise and fall of the Third Reich (17% complete – this one is a tome) whenever I feel like it. I can confidently say that I would never have made any progress on the last two books if I had stuck to the one book at a time policy that paper books unintentionally forced me to adopt. I would have given up and moved on to the next shiny thing 3 chapters into a history book.

I might never complete reading some of the books that I have on my kindle (looking at you, Third Reich), but that is not the point. In his book Black Swan, Nassim Nicholas Taleb introduces the concept of an anti-library:

The writer Umberto Eco belongs to that small class of scholars who are encyclopedic, insightful, and nondull. He is the owner of a large personal library (containing thirty thousand books), and separates visitors into two categories: those who react with “Wow! Signore professore dottore Eco, what a library you have. How many of these books have you read?” and the others—a very small minority—who get the point that a private library is not an ego-boosting appendage but a research tool. The library should contain as much of what you do not know as your financial means allow you to put there. You will accumulate more knowledge and more books as you grow older, and the growing number of unread books on the shelves will look at you menacingly. Indeed, the more you know, the larger the rows of unread books. Let us call this collection of unread books an antilibrary.

Replace “unread books” with “partially read books” and you can immediately see how switching to the Kindle has benefitted my anti-library.

The “I now read more” argument

I have been reading on a kindle for about 5 months now, and I do not see myself going back. If there ever was a single compelling advantage that a kindle gave me over paper books, this is it: I read more on a kindle. Much, much more.

I now read while I wait, while I eat, and while I poop. Because the device pleasantly fits into my palm, I can now read while I’m having dinner instead of watching something on YouTube. You may think that this is not a big deal – but for me, it makes all the difference. Unlike finding time to read, finding time to eat is something that I must do in the interest of self-preservation. Coupling eating with reading is a win-win.

But can’t you just read on your laptop/phone while you eat?

Even if I gloss over the possibility of food on my keyboard, a laptop on the dinner table is just outright inconvenient. Reading on the phone might work. I really do not have a solid reason (apart from the LED screen) as to why I could not bring myself to read on my phone regularly.

I am going to deliberately avoid discussing all the other nice things about using an e-reader. I do find myself taking a lot of notes while I read – something I never used to do with paper books since I couldn’t be bothered to carry around a pen. It is also useful to highlight interesting anecdotes/quotes in a book and then later see them in a compact list. But IMHO these are fringe benefits.

Some raw data

Pre-kindle: the list of books I read from September 2017 to January 2019 (16 months), in no particular order. This includes both physical books and the few books that I read on my phone:

  1. God of small things, Arundhati Roy (on my phone)
  2. Animal Farm, George Orwell (small book)
  3. Ministry of utmost happiness, Arundhati Roy
  4. Designing Data-Intensive Applications, Martin Kleppmann (physical book, still reading)
  5. The God Delusion, Richard Dawkins
  6. Walden, Henry David Thoreau
  7. The old man and the sea, Ernest Hemingway (on my phone, small book)
  8. Catch 22, Joseph Heller
  9. The White Tiger, Aravind Adiga
  10. Meditations, Marcus Aurelius (on my phone, read a few pages here and there)
  11. Blink, Malcolm Gladwell (Read around half of it)

The post-kindle list, spanning February 2019 to May 20, 2019 (4 months):

  1. Crime and Punishment, Fyodor Dostoevsky
  2. 1Q84, Haruki Murakami
  3. Black Swan, Nassim Nicholas Taleb
  4. Antifragile, Nassim Nicholas Taleb
  5. The Fountainhead, Ayn Rand
  6. Stories of Your Life and Others, Ted Chiang
  7. Deep work, Cal Newport (44% complete)
  8. The Prince, Niccolò Machiavelli (40%)
  9. The Rise and Fall of the Third Reich, William. L. Shirer (17%)
  10. Aatujeevitham, Benyamin (36%)
  11. Flow, Mihaly Csikszentmihalyi (17%. No intention of returning to this book)
  12. The New Evolution Diet, Arthur De Vany (28%, No intention of returning to this book)

Though I am not a “voracious” reader by any stretch of the imagination, you can see that compared to the pre-kindle rate of 10 books in 16 months, 6 books in 4 months is indeed an improvement. Note that this counts only the completed books – 17% of “The Rise and Fall of the Third Reich” is as long as some independent books -_-. I admit that the pre-kindle list is likely to be incomplete – I do not remember all the books that I have picked up and left halfway. Nevertheless, the lists should be able to convince you, albeit rather unscientifically, that I read more after I switched to a Kindle.

Programming: doing it more vs doing it better

A few years ago, very early into my programming career, I came across a story:

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot – albeit a perfect one – to get an “A”.

Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work – and learning from their mistakes – the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

Jeff Atwood’s “Quantity Always Trumps Quality” post, though he himself took the story from somewhere else.

This little story has had a tremendous impact on how I approach software engineering as a craft. I was (and still am) convinced that the best way to get better at software engineering is to write more software. I was careful enough to not take the story too seriously – I have always strived to write readable, maintainable code without bugs. However, deep inside my mind was this idea that one day I would be able to write beautiful code without thinking. It would be as effortless to me as breathing. “Refactoring code” would be something left to the apprentice, not something that I, the master who has churned out enough ceramic pots, would be bothered with. I just have to keep making ceramic pots until I get there.

Three years later, I am still very much the apprentice. Rather than programming effortlessly, I have learned to program more deliberately. I have learned (the hard way) to review my code more thoroughly and to refactor it now rather than later. I get pangs of guilt and disappointment every time my pull request has to go through another round of review. I am frustrated when I deliver a feature two days late. As an engineer I want to, above everything else, churn out (the right) features as fast as possible.

Today, I came across an essay that would let me resign from my perpetual struggle to “get faster” at engineering:

I used to have students who bragged to me about how fast they wrote their papers. I would tell them that the great German novelist Thomas Mann said that a writer is someone for whom writing is more difficult than it is for other people. The best writers write much more slowly than everyone else, and the better they are, the slower they write. James Joyce wrote Ulysses, the greatest novel of the 20th century, at the rate of about a hundred words a day

William Deresiewicz, Solitude and Leadership

I can strongly relate to this – I would often read and re-read something that I wrote and then I would go back and change it, only to repeat the process again. Though comparing my modest penmanship (keymanship?!) to “the best writers” is outright sacrilegious, even I have in the past noticed that the slower I write, the better I write.

The equivalent in software engineering terms would be to (nothing you did not know before, except for maybe the last point):

  1. Put more thought into the design of your systems
  2. Refactor liberally and lavishly
  3. Test thoroughly
  4. Take your sweet time

As I said, nothing you did not know before. Also, this is almost impossible to pull off when you have realistic business objectives to meet.

But James Joyce probably did not write Ulysses with a publisher breathing down his neck saying “We need to ship this before Christmas!”.

So the secret sauce that makes good code great and the average Joe the next 10x programmer might be this – diligence exercised over a long time.

How does this affect me? Disillusionment. Writing more software does not automatically make you a better programmer. You need the secret sauce, whatever that might be.

Announcing matchertools 0.1.0

Matchertools is my “hello world” project in rust, and I have been chipping away at it slowly and erratically for the past couple of months. You can now find my humble crate here. The crate exposes an API that implements the Gale-Shapley algorithm for the stable marriage problem. Read the wiki. No really, read the linked Wikipedia page. Lloyd Shapley and Alvin Roth won a Nobel prize for this in 2012. Spoiler alert – unlike what the name indicates, the algorithm has little to do with marriages.
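The algorithm itself is compact. Here is a rough pure-Python sketch of Gale-Shapley (illustrative only – the crate’s actual Rust API differs, and all names here are mine):

```python
def stable_match(proposer_prefs, acceptor_prefs):
    """Gale-Shapley: proposers propose in preference order; acceptors
    hold on to the best offer seen so far. Returns {proposer: acceptor}."""
    # rank[a][p]: position of proposer p in acceptor a's list (lower = preferred)
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)               # proposers still unmatched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                              # acceptor -> proposer

    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]  # p's best not-yet-tried acceptor
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p                    # a was free: tentatively accept
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])           # a trades up; old partner is freed
            engaged[a] = p
        else:
            free.append(p)                    # rejected; p will try the next one
    return {p: a for a, p in engaged.items()}
```

With equal numbers of proposers and acceptors and complete preference lists, this terminates after at most O(n²) proposals, and the matching it produces is stable and proposer-optimal.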

This project is so nascent that it is easier for me to list what it does not have:

  1. No documentation
  2. No examples
  3. Shaky integration tests
  4. No code style whatsoever. I haven’t subjected the repo to rustfmt yet (gasp!)
  5. Duct-tape code.
  6. Not nearly enough code comments.


I had recently adopted a new “philosophy” in life:

Discipline will take you farther than motivation alone ever will

Definitely not me, and more a catch-phrase than philosophy

Most of my side projects do not make it even this far. I go “all-in” for the first couple of days and then my enthusiasm runs out and the project is abandoned before it reaches any meaningful milestone.

But I consciously rate limited myself this time. I had only one aim – work on matchertools every day. I did not really care about the amount of time I spent on the project every day, as long as I made some progress. This also meant that some days I would just read a chapter from the wonderful rust book and that would be it. However, I could not stick to even this plan despite the rather lax constraints – life got in the way. So my aim soon degenerated into “work on matchertools semi-regularly, whenever I can, but be (semi) regular about it”. Thus in two months, I taught myself enough rust to shabbily implement a well-known algorithm. Sarcasm very much intended.

Though I was (am) horrified at the painfully slow pace of the project, the “be slow and semi-regular but keep at it” approach did bear results:

  1. I learned a little rust. I am in love with the language. The documentation is superb!
  2. I produced something, which is far better than my side-project-output in the past 18 months – nothing.

Besides, I have realized that much of what happens around me is unplanned and unpredictable to a larger degree than I had thought. I am currently working on revamping the way I plan things and the way I react when my plans inevitably fail. A little Nassim Nicholas Taleb seems to help, but more on that later.

Lessons from learning to play the violin


Learning to play the violin introduced me to western classical music, amateur orchestras, and deliberate practice. Even though I will never seriously pursue music, it was well worth my time.

Some background

I have been taking violin lessons as an adult beginner from December 2016 until now. I have stopped my lessons temporarily since I will be moving to a different city soon. As of January 2019, I am at Suzuki book 3. The decision to pursue violin as an adult might have been influenced by the Carnatic violin lessons I took when I was eleven years old [see sunk cost fallacy].

Also, partly owing to my limited knowledge, I am going to collectively refer to baroque, renaissance and classical music as just “classical music”.

1. Classical music is cool!

I would often come home from a practice session and look up the music we learned that day on Youtube. Though it started as an exercise to get more familiar with the music, I found myself listening to music more actively rather than just letting it play “in the background”. This simple act of being more attentive to what I listen to helped a lot in letting me appreciate classical music.

I was never much interested in the classical genre – most of the pieces I had encountered earlier were simply too long for my short attention span. The lack of an obvious, simple, repeating “chorus” in the genre was something I found hard to come to terms with. However, listening attentively led to the realization that the sophistication and the “cleverness” in the music is something that I could enjoy. My first “aha!” moment was when I stumbled upon the Canon in D. I could see how nuanced the composition was (to my untrained ear), and how each of the violinists seemed to be playing something entirely different yet similar. This was brilliant.

Then I discovered Antonio Vivaldi. I was blown away.

Then there are compositions like the “Moonlight” sonata, which I did not quite like the first time I heard it and now I cannot imagine how I could have not loved it all this while. There is clearly a method to the madness.

My favorite rendition of the Canon in D

2. It is better to progress slowly, but surely.

The vibrato is a technique that every budding violinist hopes to master one day. Six to seven months ago, my vibrato was barely audible – I had to strain my ears to recognize it. Even though I am still a long long way from a respectable vibrato, today I can do some vibrato. A shitty vibrato feels much better than no vibrato.

I did not have to practice particularly hard or long to achieve this. I learn the violin for leisure and am accordingly rather leisurely when it comes to practice. I am happy that though I do not play daily (not a good thing), the 40 minutes of practice I put in 3-4 times a week actually let me (slowly) progress in my lessons. This was new for me. I did not have to work hard to progress – I just had to work somewhat consistently. If I had applied this principle to other things in my life, such as contributing to an open source project or going to the gym, I would today have a much more braggable resume and much less belly fat.

3. Short, deliberate practice is much better than long hours of unfocused practice

When learning new music, my teacher often tells me that once you learn to play the hardest part, the rest becomes very easy. The developer in me resonates with this idea – there is no point in optimizing the rest of your code until you address the bottlenecks. The bottleneck in my violin lessons is often a fast section of a composition or a part that requires a new finger position. I would often try to avoid putting in the work and wouldn’t practice the difficult parts separately, partly because playing just the difficult parts is boring. It is much more pleasurable to attempt the music as a whole and enjoy playing at least part of the composition, instead of tackling just the difficult parts and consequently sounding like a cat being tortured. Inevitably, a few days later, I would realize that I am no closer to playing the piece because the difficult parts are holding me back. To make progress, I have always had to prioritize learning the difficult portions.

Some parallels that I can draw to software development include learning new programming paradigms or tackling problems outside your usual domain of expertise. I have recently started reading this wonderful book on mathematics even though I have covered most of the topics as part of my CS bachelors degree. Writing code for the exercises at the end of each chapter is sure to get me out of my comfort zone, and using rust to attempt those exercises will make things more interesting.

4. Use social commitments to your advantage

I performed on a stage for the very first time on October 29, 2018. Even though I played the relatively easier second violin part, the pieces that my teacher chose for the orchestra were beyond my skill level. I had four months to “get my shit together” and “man up” for the big day. Horror-struck by the idea of embarrassing myself in front of a crowd, I started pouring extra time into my practice sessions. Vivaldi’s Summer was a particular pain in the ass – it was simply too fast for me. Eventually, we stopped following the Suzuki books in my personal classes and focused only on being able to play the second violin part of Summer by October.

When the big day came, I was not even nearly ready for that performance. I played a lot of wrong notes and to make things worse, my music stand’s hinge broke and I had to try and read from a stand in the next row. I felt terrible at the end of the day. When I talked to my teacher about how disappointed I was with myself, this was his response:

It does not matter. Do you really think that I do not make mistakes? The final performance was not at all significant compared to what you learned in the months of preparation leading to it.

Raja Singh, The creative school of math and music, New Delhi

The final performance was just an excuse to get the students to punch above their weight class. I must say that it worked – I would not have put in the extra time and effort in the absence of a social commitment. The orchestra also taught me how to follow a conductor, and I could not help but chuckle when I realized that the conductor is just a glorified metronome. Something similar happened when I committed to writing something for my employer’s engineering blog. We were trying to create a brand around the culture we strove to build in the engineering team, and I did not want to do a sloppy job. While I usually invest only a couple of hours into a blog post, this particular one took an entire weekend and went through multiple iterations. The result was head and shoulders above anything I had written till date. Social commitments FTW!

Us performing Mozart’s Symphony No. 25. I’m the tall-ish guy at center-right last row who seems to be barely playing. I need to use more bow *sigh*.

5. It is okay to not like something

My opinion before my introduction to classical music:

Country/Acoustic/Pop > Rock > Hip Hop > Metal > Classical

What I thought my opinion would be after (a mere) 2 years of violin lessons:

Classical music > everything-else > Metal

Unfortunately, such PC master race > console peasantry type comparisons are useless in music. For example, I do not get why people love Chopin. I mean yeah this sounds nice, and I would very much like to claim that I listen to Chopin and thus validate my “superior” taste in music. But the truth is, I like Tarzan and Jane much more than I like Frédéric Chopin. To each his own.

Squad takes the Joel test

As originally published in Squad Engineering

Disclaimer: Though this post sounds like fancy startup propaganda, it is not. I still stand by what I’ve written, even though I no longer work at Squad.

Considering that the Joel test dates back to the turn of the century, a time when Pentium III was state of the art and Linux was still obscure, it has aged quite gracefully. It is no longer the gold standard against which development teams are rated, but it still is surprisingly relevant (for the most part).

At Squad we strive to build an engineering culture of doing more with less, and having a super smooth kick-ass development workflow is a necessity, not a luxury. Here we go.

1. Do you use source control?

Yes, git. All our code lives on Github. This one’s a no-brainer.

2. Can you make a build in one step?

One step builds are nice, but no builds are even nicer. In python-land, you don’t build — you deploy. As a developer, all I have to do is commit my changes to the staging branch and do a fab deploy, and voila! I can now test my changes on our staging server to my heart’s content. The whole deploy-to-staging process takes less than 5 minutes. Deploying to production is just another 5 minutes away, assuming you tested your feature thoroughly on the staging server.

3. Do you make daily builds?

As I said, we don’t really have ‘builds’ and that’s a good thing. What we do have is continuous integration using CircleCI. Unit tests are run automatically every time I commit to the repo and with a Slack integration, the team gets notified whenever a build is completed.

Looks like Nitish is killing it today!

4. Do you have a bug database?

Yes, we track all our features and bugs on Pivotal Tracker. We have git integrations, and every commit to the repo is automatically recorded under the relevant story. All discussions relevant to a bug/story happen in a single place. Did I mention we seldom use emails? Yup, that’s right – I can count on my fingers the number of times I had to use email to communicate with my teammates.

5. Do you fix bugs before writing new code?

Depends. Before you go berserk thinking “Why would Squad dooo thaaat?” let me humbly point you to Jeff Atwood’s take on the matter — not all bugs are worth fixing. Since we strive to be as lean as possible, every hour we sink into a bug has to be justified. In fact, our developers ruthlessly confirm the ‘ROI’ first before diving into the code base to hunt down and fix the bug.

That being said, you’ll never see a Squad dev building a feature on top of an existing bug. If thou see-eth the bug, thou fixeth the bug while writing thy feature. Also, we don’t believe in titles, so the decision of whether or not to fix a bug usually comes down to you and not to a mystical manager two tiers above you.

6. Do you have an up-to-date schedule?

We have ‘solver teams’ that are super committed to solving a focused problem and each solver team has a set of Objectives and Key Results (OKRs) for a quarter. This results in a very transparent schedule, agreed on by the whole team, and meeting or breaking it (if necessary) is always a collective decision and not a directive from up top.

What this means is that if Squad was trying to colonize Mars, every member of the solver team would know exactly when to work on making the landing lights look nice and when to focus on just hurtling the shuttle in the approximate direction of Mars.

From when to fix the ignition switch to saying ‘Hi’ to the alien, the solver team has it all figured out.

But despite our best efforts, sometimes deadlines are missed. We try to shed more weight (no landing lights on the rocket) and invoke ** ‘focus on focus’ to get the most critical components shipped. What happens if we’re still unable to meet the schedule? That’s when we own up that our estimates were incorrect and do it better the next time.

** Focus solely on what is worth focusing on. Thanks, Rishabh

7. Do you have a spec?

Walking on water and developing software from a specification are easy if both are frozen — Edward V. Berard

Remember what I said about stories in Pivotal Tracker? Our specs too live inside stories. This is the process we follow:

  • Discuss with all stakeholders and draft a requirements doc. This doc will act as the source of truth for what needs to be solved and what needs to be built. This is our spec, and it is frozen as far as this story is concerned
  • Design a solution. What’s the easiest, most elegant way to meet the specs? Write down the design, the tasks involved and estimate the story
  • A teammate reviews your design, and a meaningful discussion about the merits of your design and alternate solutions ensues
  • You and your reviewer agree on a design, and you happily code away

8. Do programmers have quiet working conditions?

Oh this is my favorite part.

We have two really cool ‘silent rooms’ anybody can go to when they feel that the office bullpen is getting too loud. Once inside, you are not allowed to talk (unless you and your buddy have super-human whispering skills) and your phones should be on silent. Not on vibrate, but on silent. This is the metaphorical cave where programmers disappear into and come out with bags full of features. Silent rooms are our rock and our refuge, the one place where we won’t be disturbed. The rooms are insulated so that if someone revs their bike right outside the window, we wouldn’t know about it. And yes, I’m writing this blog sitting in one of the silent rooms.

One of our two beloved silent rooms

9. Do you use the best tools money can buy?

Yes, developers bring their own device to work. I actually moved my assembled desktop into the office since I was convinced that an underpowered laptop without a discrete GPU won’t be able to ‘handle me’. Whatever makes you productive.

IDEs too are up to you to decide, and my favorite contender is PyCharm. We have also employed a slew of awesome tools like Slack, Sentry, and Loggly to make sure that our developers are as productive as they can be. Look at our StackShare page for more.

However, this does not mean we have a paid subscription for ‘insert your favorite tool for x here’. We don’t give MacBook Pros to our developers (but we do finance interest-free EMIs if they choose to buy one). We just recently got Slack premium. We are moving from Asana to ClickUp. We have not maxed out the parallelism of our CircleCI builds. We also don’t swim in money. We only buy things that make us more productive.

10. Do you have testers?

No. The developers are responsible for their stories/features and are expected to test them well before pushing to production. However, since another pair of eyes is always better, the person who requested the story (usually a Product Manager from the team) would do ‘Acceptance Testing’ to make sure what you wrote conforms to the (hopefully frozen, rock solid) specs.

11. Do new candidates write code during their interview?

Yes, an insane amount of code. These are the steps that you would go through to get hired as a Product Engineer at Squad:

  • A call from our awesome CTO, Vikas
  • A design round, over the phone
  • A take-home assignment. (During my interview, I spent around 2 days on it and wrote a lot of code)
  • The team at Squad runs your code, and see if it works as intended. Bonus points if you have a readme.md that makes our life easier
  • The team then reviews your code. I want to stress this part. We actually read your code, and ‘it works’ does not mean you pass
  • A meaningful discussion over email about your chosen design and execution
  • You get invited for an on-premise interview. First round is ‘activity extension’ — extend your take-home assignment to add a couple of features. If your methods/modules were too tightly coupled and inflexible, you’d have a hard time here
  • Another design round. You are expected to design the solution to a problem, and write skeleton code to solve it. Given the time constraint, you get bonus points if your code actually works
  • A bug smash round. You are given a code base and your task is to refactor it according to your definition of ‘good code’

Yes, that’s a lot of code and I guarantee you that it will be the best technical interview you’ve ever had!

12. Do you do hallway usability testing?

Remember what I said about our high octane HQ? It is almost impossible to keep a feature under wraps till the time it’s released. “Oh Kevin it looks cool but I don’t think it is really useful because of x and y” happens a lot and I’m grateful for it. Continuous improvement and constant iteration is part of our DNA. That being said, there have also been instances where I waited too long to show my features, or I committed to production and then asked the stakeholder to go through the feature. Hallway testing is a guideline. Though we strive to follow the best practices and guidelines whenever possible, it is not always possible. As they say, it is easier to ask forgiveness than to ask for permission. Not always, but sometimes.

So we’re finally here — time to count the points. I would claim that we scored a solid 11, but a skeptic would argue that we barely made a 10. You be the judge.

Also, there are a lot of questions that the test does not address, like how easy it is for developers to work remotely (very easy, our team is partly remote) and how often you refactor code (very often). But I’ll defer that to another post.


This post was much ‘shittier’ than it is now. Thanks to my wonderful friends at Squad for helping me fix it

Django code review for dummies

After 2 years at an enterprise backup software firm, I finally took the plunge and joined a startup. I love the engineering culture we have here at Squad, and rigorous code reviews are very much a part of it. Since I often found myself repeating the same mistakes again (and again, and again..), I went ahead and wrote a checklist to help me.

Very wise words from Jake the dog

This is not a generic list and is very much tied to me and the mistakes that I made. The list helped me, but your mileage may vary


Read through the code you wrote, and stop and ponder at each line.

The actual (noobie) list

1. No create/update inside a loop

Are you making create() or update() calls in a loop? Have you considered whether they could be replaced with bulk_create() and bulk_update()? See the django-bulk-update package.
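In Django the fix is usually a one-liner such as Post.objects.bulk_create(posts), but the payoff is easier to see in raw SQL. Here is a sketch using stdlib sqlite3 (the table and data are made up) – executemany plays the role of bulk_create, collapsing N round trips into one statement:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE post (title TEXT)')
titles = ['a', 'b', 'c']

# Anti-pattern: one INSERT per row, like calling create() in a loop
for t in titles:
    con.execute('INSERT INTO post (title) VALUES (?)', (t,))

# Better: a single batched statement, like bulk_create()
con.executemany('INSERT INTO post (title) VALUES (?)', [(t,) for t in titles])

print(con.execute('SELECT COUNT(*) FROM post').fetchone()[0])  # 6 rows total
```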

2. No unnecessary model attribute fetches

Are you writing post.blog.id where you could have gotten away with post.blog_id? For example:

class Blog(models.Model):
    title = models.CharField(max_length=200)

class Post(models.Model):
    blog = models.ForeignKey(Blog)
    time = models.TimeField()
    author = models.CharField(max_length=100)

Let’s say you want to know the id of the blog to which a particular post belongs. One way is to do:

 post = Post.objects.first()  # somehow get a reference to a Post object
 print post.blog.id

But the better way is to do:

 print post.blog_id

What’s the difference? In the first case, we are asking Django to fetch the id attribute from the post’s blog entry. As you probably know, Post and Blog are separate tables in the database. So when you ask for post.blog.id, Django will query the Blog table to fetch the id that we need. That’s an extra query. This is not really necessary, because we already have the information we need in the Post object itself. Since we have a foreign key relationship from Post to Blog, Django must somehow keep track of which post is related to which blog. Django does this by storing a special blog_id attribute in Post, which holds the primary key of its parent Blog. So post.blog_id gives us the information we need without resulting in an extra query.

3. Docstrings and comments are important

Read through the docstrings and comments. It might seem unimportant to read the docstrings and comments when you have a feature to ship. But a wrong comment/docstring can throw the next developer who reads your code off track, and trust me you don’t want to be that guy.

4. If possible, limit your business logic to the model classes

Be mindful of where you write your business logic. Coming from the jQuery world, I had a tendency to put my business logic wherever I pleased. Avoid writing business logic in ModelAdmin classes or ModelForm classes (yikes!) and write it where it belongs – Model classes. This ensures:

  • Consistency in the codebase. If it is business logic, there’s only one place it could be.
  • Better tests. If it’s in a Model class, then you know you should write unit tests for it

5. Is it covered by unit tests?

Speaking of unit tests, how do you decide when to write unit tests and when not to? If it’s business logic, it needs unit tests. No buts, no maybes, just write the damn tests. If it is something in your ModelAdmin, then you can afford not to write unit tests for that as long as you don’t do any fancy if..elses there. Test your business logic, not your boilerplate
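As a concrete (if toy) version of that rule – business logic gets tests, boilerplate doesn’t – here is a framework-free stand-in for a model method with the tests it deserves. The Order class and its numbers are invented, and plain unittest stands in for Django’s test runner:

```python
import unittest

class Order:
    """Stand-in for a Django model; total() is the business logic under test."""
    def __init__(self, items, discount=0.0):
        self.items = items          # list of (price, quantity) pairs
        self.discount = discount    # fraction between 0.0 and 1.0

    def total(self):
        gross = sum(price * qty for price, qty in self.items)
        return round(gross * (1 - self.discount), 2)

class OrderTotalTests(unittest.TestCase):
    def test_total_applies_discount(self):
        order = Order([(10.0, 2), (5.0, 1)], discount=0.1)
        self.assertEqual(order.total(), 22.5)

    def test_empty_order_is_free(self):
        self.assertEqual(Order([]).total(), 0.0)

# run with: python -m unittest <this_module>
```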

6. Think how your changes affect the existing data

In some cases, for example, when you introduce a new field, you might have to write a data migration so that existing rows in the table would have a sane value for that field. I made the rookie mistake of happily coding away on my feature with nary a thought about the existing data in the database and regretted it afterward. See here for a primer on Django migrations

7. Use that cache!

Keep an eye out for things that can be cached. Find yourself fetching the ‘top 10 scorers of all time’ from the db every time the page loads? Cache it! Though this should be fairly obvious, it’s easy to forget about the cache when you are busy writing your feature.
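Django’s cache framework makes this a few lines; the underlying cache-aside pattern itself looks like the sketch below, with a plain dict standing in for memcached/redis and invented data:

```python
cache = {}  # stand-in for a real cache backend (memcached, redis, ...)

def get_top_scorers(db_rows, limit=10):
    """Return the top scorers, hitting the 'database' only on a cache miss."""
    key = ('top_scorers', limit)
    if key not in cache:
        # The expensive part: only runs when the key is absent (or evicted)
        cache[key] = sorted(db_rows, key=lambda r: r['score'], reverse=True)[:limit]
    return cache[key]

rows = [{'name': 'ann', 'score': 9}, {'name': 'bob', 'score': 7},
        {'name': 'cho', 'score': 12}]
print(get_top_scorers(rows, limit=2))  # computed once, then served from cache
```

A real backend would also set an expiry on the key so the leaderboard does not go stale forever.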

8. Offload non-critical heavy tasks to an async queue

Okay, this one is a little specific to our particular stack. Let’s say you have a feature where your user presses ‘generate cats report’ button and you wade through the entire database to figure out how much of your total traffic involved pictures of grumpy cats. It is probably not a good idea to make your user stare at a loading screen while we crunch gigabytes of cat data. Here’s what you could have done:

  • When the user presses the button, start an async task to calculate grumpy cat traffic volume. We use celery to make this happen.
  • Once you fire the async task, immediately respond to the http request with the message “Looking for grumpy cats in the system, we will let you know when we are done”. Now your user can use his/her time for something more productive
  • Message the user in slack/send an email/display a button on the page when your async task is done.

This will let us offload heavier tasks to spare EC2 instances so that more critical requests/queries do not get slower because of grumpy cats
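We use celery for this, but the submit/acknowledge/notify shape is framework-agnostic. Here is a stdlib sketch with a thread pool standing in for the celery workers (all function names are invented):

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)  # stand-in for celery workers

def count_grumpy_cats():
    # the heavy, slow part — runs off the request thread
    return sum(1 for _ in range(1000))

def notify_user(future):
    # stand-in for the Slack message / email sent when the task finishes
    print('Found %d grumpy cats' % future.result())

def handle_generate_report_request():
    future = executor.submit(count_grumpy_cats)   # fire the async task
    future.add_done_callback(notify_user)         # notify when it completes
    # respond to the HTTP request immediately, without waiting
    return 'Looking for grumpy cats in the system, we will let you know when we are done'

ack = handle_generate_report_request()
print(ack)  # the user gets this right away; the count arrives later
```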

The grumpy cat programmer

9. Be aware of popular optimizations

Know your python. Use list comprehensions over for loops, izip over zip, et cetera. This comes with time and practice, so don’t sweat it. Oh, and don’t forget this:

need_refuel = None
if fuel_level < 0.2:
    need_refuel = True
else:
    need_refuel = False

The above mess can be refactored into:

need_refuel = fuel_level < 0.2

Whether this aids or impairs readability is a whole different debate. Things like these are subjective, and it is okay to have opinions.

10. Query only what you need

If you had a model like this:

class Banana(models.Model):
    gorilla = models.CharField(max_length=100)
    tree = models.CharField(max_length=100)
    jungle = models.CharField(max_length=100)

(God save you if you actually have a class structure like that in production, but it serves our example well.) Say you want to do something like this:

banana = Banana.objects.get(id=3)

What you wanted was a banana, but what you got was a gorilla holding the banana with the tree it was sitting on along with the entire fricking jungle (thanks, Joe Armstrong for the quote). Not cool.

What you can do instead is:

banana = Banana.objects.only('id').get(id=3)

Here’s the documentation for only. No more gorillas, just the banana. However, I prefer using .values_list('id', flat=True) over .only('id'), because the latter can result in extra queries if we carelessly try to use any attribute other than id later in the code. Using .values_list() is very explicit and conveys to the reader that you only need this particular attribute here. It is also faster.

For the love of bananas, just query only what you need

11. Learn to use the Django Debug Toolbar

The Django Debug Toolbar is your friend. Lavishly using only() and defer() can bite you if you are not careful. If you defer loading attributes that you think you won’t need, but end up needing them anyway, each access costs an extra DB query. In the Django admin list pages especially, this results in the dreaded n+1 query. Let’s say you want to tabulate bananas and gorillas:

id | Gorilla Details
 1 | Chump, Amazon rainforest
 2 | Rick, Cambodia
 3 | Appu, Kerala

You thought all you needed was id and gorilla, so you did Banana.objects.all().only('id', 'gorilla') so that we don’t fetch the tree and the jungle. But 3 months later, you thought it would be a good idea to display where each gorilla came from in your table. So you add a custom function to the ModelAdmin:

def gorilla_details(self, obj):
    return '{0} {1}'.format(obj.gorilla, obj.jungle)

And everything worked smoothly. But unbeknownst to you, Django is now making DB queries in a loop. We had told Django to fetch only id and gorilla through the only() method, and now we need the jungle as well. So whenever we access obj.jungle, Django queries the DB to get the jungle, because we specifically told it not to fetch the jungle earlier. We end up making 10 extra calls to the db for 10 gorillas (or bananas, whatever). The fix is to include jungle in the only() clause, but more often than not we do not even know that we are making an n+1 query.
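You can reproduce the n+1 pattern outside Django and actually count the queries. A stdlib sqlite3 sketch (the schema is invented to mirror the example), using the trace callback to record every statement sent to the database:

```python
import sqlite3

con = sqlite3.connect(':memory:')
queries = []
con.set_trace_callback(queries.append)  # record every SQL statement executed

con.execute('CREATE TABLE banana (id INTEGER PRIMARY KEY, gorilla TEXT, jungle TEXT)')
con.executemany('INSERT INTO banana (gorilla, jungle) VALUES (?, ?)',
                [('Chump', 'Amazon rainforest'), ('Rick', 'Cambodia'), ('Appu', 'Kerala')])

queries.clear()
# n+1 flavour: one query for the ids, then one more query per row for the jungle
ids = [row[0] for row in con.execute('SELECT id FROM banana')]
for i in ids:
    con.execute('SELECT jungle FROM banana WHERE id = ?', (i,))
print(len(queries))  # 4 queries for 3 rows

queries.clear()
# fetch everything you need up front: one query total
rows = con.execute('SELECT id, gorilla, jungle FROM banana').fetchall()
print(len(queries))  # 1
```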

Enter the Django Debug Toolbar.

Among many other things, DDT will tell you how many queries were fired to load your page. So if our banana-gorilla table makes 35 queries to the database to load, we know something’s wrong. Always look at what the debug toolbar has to say before you send in that pull request for review.

Sorry for the long post. Here’s a potato.

This tiny potato will get you through it

Authentication with React-router 4.x

This article is inspired by the excellent tutorial by Scott Luptowski on how to add authentication to your React app. I attempt to re-invent the wheel because the original article targets an older version of react-router, and its instructions do not work if you are using react-router 4.x. There are a lot of breaking changes when you migrate from 3.x to 4.x, and there is an answer to all the whys here.


I’m not a Reactjs ninja or rockstar or paladin or anything of that sort. Just a dude with good intentions who had to spend an entire evening trying to figure out how authentication with react-router 4.x works, when the internet had only tutorials that use 3.x. So take my advice with a pinch of salt – I might not be following the best practices.

The goal

Your glorious new app requires the user to log in before they are allowed to do certain things. For example, if you were building the next Twitter, your users shouldn’t be able to tweet unless they are logged in. The idea here is to put certain URL patterns/pages behind an authentication wall so that if a user visits that page and the user is not logged in, he/she should be redirected to a login page. If the user is already logged in, proceed to show the requested content – and the user will have no idea about the karate chops we did behind the stage. Should the user try to navigate to a page that does not exist, we should show a 404 component as well.

The how-to

The solution is simple enough. Just like Scott explained in the original article, we create a React component that contains the login logic. This component wraps all of the routes that require authenticated users. Our entry point to the app would look something like this:

import React from 'react';
import ReactDOM from 'react-dom';
import {BrowserRouter} from 'react-router-dom';

ReactDOM.render(
	<BrowserRouter>
		<App />
	</BrowserRouter>,
	document.getElementById('root')
);

But where did all the routes go? As of react-router 4.x, you don’t get to define all your routes in one place. Yep, you read that right. So our App component will be doing its part in routing:

class App extends Component {

	render() {
		return (
			<Switch>
				<Route exact path="/" component={Home} />
				<Route path="/" component={RootRouteWrapper} />
			</Switch>
		);
	}
}

All the awesomeness in the world converged to a single component.

So what are we doing here? If the url exactly matches /, we render a Home component. For everything else that is a subset of /, we render RootRouteWrapper, which will subsequently route our requests. So all the other url patterns (eg: /pizza, /pizza/yummy) would go on to render the RootRouteWrapper component. But what’s that Switch component doing there? If we had not enclosed the routes in a Switch, react-router would have rendered all routes that matched the url. So if the user visits your-awesome-app.com, all the routes for / will trigger – both Home and RootRouteWrapper! If your routes are enclosed in Switch, react-router will render only the first match – in our example the Home component.
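The render-all versus first-match-wins behaviour can be mimicked in a few lines of plain JS. This is a sketch of the idea, not react-router’s actual implementation – the `matches`, `matchAll` and `matchFirst` helpers are made up for illustration:

```javascript
// Sketch of Switch's first-match-wins behaviour (illustrative only).
// An exact route must equal the url; a non-exact route may be a prefix.
const routes = [
  {path: '/', exact: true, name: 'Home'},
  {path: '/', exact: false, name: 'RootRouteWrapper'},
];

function matches(route, url) {
  if (route.exact) return url === route.path;
  return url === route.path || url.startsWith(route.path);
}

// Without Switch: every matching route renders.
function matchAll(url) {
  return routes.filter((r) => matches(r, url)).map((r) => r.name);
}

// With Switch: only the first matching route renders.
function matchFirst(url) {
  const hit = routes.find((r) => matches(r, url));
  return hit ? [hit.name] : [];
}
```

Running `matchAll('/')` gives both components, while `matchFirst('/')` gives only Home – which is exactly why we wrap the routes in a Switch.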

OK. So now we can show a home page. What does the RootRouteWrapper component do again?

class RootRouteWrapper extends Component {
	render() {
		return (
			<div id="RootRouteWrapper">
				<Switch>
					<Route path="/login" component={Login} />
					<Route path="/tweet" component={EnsureLoggedInContainer} />
					<Route component={PageNotFound} />
				</Switch>
			</div>
		);
	}
}

We define 2 routes here – /login to show the user a login prompt and /tweet to let the user post a tweet. Now /tweet should be accessible only if the user is logged in. EnsureLoggedInContainer is the magic component that will handle the login logic for us. The idea is to configure all routes that need authentication to render the EnsureLoggedInContainer. You can also see that we have defined a route that will render the PageNotFound component if the URL does not match any configured routes. On to our login logic:

import React, {Component} from 'react';
import {Route, withRouter} from 'react-router-dom';

class EnsureLoggedInContainer extends Component {

	componentDidMount() {
		const {dispatch, currentURL, isLoggedIn} = this.props;

		if (!isLoggedIn) {
			// remember where the user was headed, then send them to login.
			// setRedirectUrl is an action creator from your app's state
			// management layer.
			dispatch(setRedirectUrl(currentURL));
			this.props.history.replace('/login');
		}
	}

	render() {
		const {isLoggedIn} = this.props;

		if (isLoggedIn) {
			return (
				<Route path="/tweet" component={Tweet} />
			);
		} else {
			return null;
		}
	}
}

export default withRouter(EnsureLoggedInContainer);

The assumption is that the Tweet component shows the user an input box to type a message. Notice how we have declared a Route for /tweet again inside the EnsureLoggedInContainer. When the user navigates to /tweet, RootRouteWrapper renders EnsureLoggedInContainer, which in turn renders Tweet. If the user is not logged in, componentDidMount will redirect the user to the login page. Remember that you need to export the class with withRouter for the history to be available in the props. Also, you would need to maintain the state of the application separately – this article assumes that you have laid down the necessary plumbing to pass isLoggedIn as a prop to EnsureLoggedInContainer. isLoggedIn should come from your application state – and react-redux seems to be the most popular choice here. How to use react-redux to pass properties to your component is beyond the scope of this article. If you are interested, there’s a really good introduction here.

In case you wanted to add another page that displays a tweet – say /tweet/1 – that would show the tweet with id 1 in a TweetContainer component – you would have to write the necessary routing logic inside the Tweet component. /tweet/:id would automatically require authentication since its parent route – /tweet – renders EnsureLoggedInContainer.
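To see what a pattern like /tweet/:id actually does, here is a rough plain-JS sketch of path-parameter extraction. react-router does this for you and hands the result to your component as props.match.params; the `extractParams` helper below is made up purely for illustration:

```javascript
// Illustrative sketch of matching a url against a pattern like /tweet/:id.
// Returns an object of extracted params, or null if the url doesn't match.
function extractParams(pattern, url) {
  const patParts = pattern.split('/');
  const urlParts = url.split('/');
  if (patParts.length !== urlParts.length) return null;

  const params = {};
  for (let i = 0; i < patParts.length; i++) {
    if (patParts[i].startsWith(':')) {
      // a :name segment captures whatever the url has in that position
      params[patParts[i].slice(1)] = urlParts[i];
    } else if (patParts[i] !== urlParts[i]) {
      return null; // literal segments must match exactly
    }
  }
  return params;
}
```

So a visit to /tweet/1 against the pattern /tweet/:id yields `{id: '1'}`, which is what TweetContainer would read from props.match.params.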


You have to make changes at 2 places to add a new route that needs authentication – in the RootRouteWrapper component and then again in EnsureLoggedInContainer. I wonder if there is a more elegant solution.