Film simulations from scratch using Python

Disclaimer: This post is more about understanding LUTs and HaldCLUTs, and about writing methods from scratch to apply these LUTs to an image, than about coming up with the CLUTs themselves from scratch.


  1. What are film simulations?
  2. CLUTs primer
  3. Simple hand-crafted CLUTs
  4. The identity CLUT
  5. HaldCLUTs
  6. Applying a HaldCLUT
  7. Notes and further reading

There is also an accompanying notebook, in case you want to play around with the CLUTs.

What are film simulations?

Apparently, back in the day, people shot pictures with analog cameras that used film. If you wanted a different “look” to your pictures, you would load a different film stock that gave you the desired look. This is akin to current-day Instagram filters, though more laborious. Some digital camera makers, like Fujifilm, started out as makers of photographic films (and they still make them), and transitioned into making digital cameras. Modern mirrorless cameras from Fujifilm have film simulation presets that digitally mimic the style of a particular film stock. If you are curious, John Peltier has written a good piece on Fujifilm’s film simulations. I was intrigued by how these simulations were achieved and this is a modest attempt at untangling them.

CLUTs primer

A CLUT, or a Color Look Up Table, is the primary way to define a style or film simulation. For each possible RGB color, a CLUT tells you which color to map it to. For example, a CLUT might specify that all green pixels in an image should be yellow:

# map green to yellow
(0, 255, 0) -> (255, 255, 0)

The actual format in which this information is represented can vary. A CLUT can be a .cube file, a HaldCLUT png, or even a pickled numpy array as long as whatever image editing software you use can read it.

In an 8-bit image, each channel (i.e. red, green or blue) can take values from 0 to 255. Our CLUT should theoretically have a mapping for every possible color – that’s 256 x 256 x 256 colors. In practice, however, CLUTs are much smaller. For example, an 8-bit CLUT (8 here being the number of steps per channel, not the bit depth) divides each channel into ranges of 32 (i.e. 256 divided by 8). Since we have 3 channels (red, green and blue), our CLUT can be imagined as a three dimensional cube:

A standard 3D CLUT. Image Credits

To apply a CLUT to an image, each color in the image is assigned to one of the cells in the CLUT cube, and the color of the pixel in the original image is changed to whatever RGB color sits in its assigned cell. With cells covering ranges of 32, the color (40, 0, 0) would belong to the second cell along the red axis in the top left corner of the cube. This also means that all the shades of red between (32, 0, 0) and (63, 0, 0) will be mapped to the same RGB color. Though that sounds terrible, an 8-bit CLUT usually produces images that look fine to our eyes. Of course, we can increase the “quality” of the resulting image by using a more precise (e.g. 12-bit) CLUT.
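The binning described above can be sketched in a couple of lines (the function name is mine; the round()-based variant used later in this post behaves slightly differently at the range boundaries):

```python
# A sketch of the cell-assignment step for a CLUT with 8 divisions per
# channel: integer division buckets a channel value (0-255) into one of
# the 8 ranges of 32.
def quantize(value, clut_size=8):
    return value * clut_size // 256

print(quantize(12))    # -> 0: first cell along the axis
print(quantize(40))    # -> 1: second cell (values 32-63)
print(quantize(255))   # -> 7: last cell
```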

Simple hand-crafted CLUTs

Before we craft CLUTs and start applying them to images, we need a test image. For the sake of simplicity, we conjure up a little red square:

from PIL import Image
img = Image.new('RGB', (60, 60), color='red')

We will now create a simple CLUT that would map red pixels to green pixels and apply it to our little red square. We know that our CLUT should be a cube, and each “cell” in the cube should map to a color. If we create a 2-bit CLUT, it will have the shape (2, 2, 2, 3). Remember that our CLUT is a cube with each side of “length” 2, and that each “cell” in the cube should hold an RGB color – hence the 3 in the last dimension.

import numpy as np
clut = np.zeros((2, 2, 2, 3))
transformed_img = apply_3d_clut(clut, img, clut_size=2)

We haven’t yet implemented the “apply_3d_clut()” method. This method will have to look at every pixel in the image and figure out the corresponding mapped pixel from the CLUT. The logic is roughly as follows:

  1. For each pixel in the image:
    1. get the (r, g, b) values for the pixel
    2. Assign the (r, g, b) values to a “cell” in our CLUT
    3. Replace the pixel in the original image with the color in the assigned CLUT “cell”

We should be careful with step 2 above – since we have a 2-bit CLUT, we want color values below 128 to be mapped to the first cell and values of 128 and above to be mapped to the second cell.

from tqdm import tqdm

def apply_3d_clut(clut, img, clut_size=2):
    """clut must have the shape (clut_size, clut_size, clut_size, num_channels)"""
    num_cols, num_rows = img.size  # PIL's size is (width, height)
    filtered_img = np.copy(np.asarray(img))
    scale = (clut_size - 1) / 255
    img = np.asarray(img)
    for row in tqdm(range(num_rows)):
        for col in range(num_cols):
            r, g, b = img[row, col]
            # (clut_r, clut_g, clut_b) together represent a "cell" in the CLUT
            # Notice that we rely on round() to map the values to "cells" in the CLUT
            clut_r, clut_g, clut_b = round(r * scale), round(g * scale), round(b * scale)
            # copy over the color in the CLUT to the new image
            filtered_img[row, col] = clut[clut_r, clut_g, clut_b]
    filtered_img = Image.fromarray(filtered_img.astype('uint8'), 'RGB')
    return filtered_img

Once you implement the above method and apply the CLUT to our image, you will be treated to a very underwhelming little black box:

Our CLUT was all zeros, and unsurprisingly, the red pixels in our little red square were mapped to black when the CLUT was applied. Let us now manipulate the CLUT to map red to green:

clut[1, 0, 0] = np.array([0, 255, 0])
transformed_img = apply_3d_clut(clut, img, clut_size=2)

Fantastic, that worked! Time to apply our CLUT to a real image:

This unassuming Ape truck from Rome filled with garbage is going to be our guinea pig. Our “apply_3d_clut()” method loops over the image pixel by pixel and is extremely slow – we’ll fix that soon enough.
import urllib.request
truck = Image.open(urllib.request.urlopen(""))  # image URL omitted
green_truck = apply_3d_clut(clut, truck, clut_size=2)

That’s a bit too green. We can see that the reds in the original image did get replaced by green pixels, but since we initialized our CLUT to all zeroes, all the other colors in the image were replaced with black pixels. We need a CLUT that would map all the reds to greens while leaving all the other colors alone.

Before we do that, let us vectorize our “apply_3d_lut()” method to make it much faster:

def fast_apply_3d_clut(clut, clut_size, img):
    """clut must have the shape (size, size, size, num_channels)"""
    img = np.asarray(img)
    scale = (clut_size - 1) / 255
    clut_r = np.rint(img[:, :, 0] * scale).astype(int)
    clut_g = np.rint(img[:, :, 1] * scale).astype(int)
    clut_b = np.rint(img[:, :, 2] * scale).astype(int)
    filtered_img = clut[clut_r, clut_g, clut_b]
    filtered_img = Image.fromarray(filtered_img.astype('uint8'), 'RGB')
    return filtered_img
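The speedup comes from NumPy “fancy indexing”: passing whole integer arrays as indices looks up one CLUT entry per pixel in a single operation. A tiny self-contained illustration (the 1×2 image and 2-level CLUT here are made up for the demo):

```python
import numpy as np

# a 2-level CLUT that maps the "red" corner cell to green
clut = np.zeros((2, 2, 2, 3), dtype=np.uint8)
clut[1, 0, 0] = [0, 255, 0]

# a 1x2 image: one red pixel, one black pixel
img = np.array([[[255, 0, 0], [0, 0, 0]]], dtype=np.uint8)

scale = (2 - 1) / 255
clut_r = np.rint(img[:, :, 0] * scale).astype(int)
clut_g = np.rint(img[:, :, 1] * scale).astype(int)
clut_b = np.rint(img[:, :, 2] * scale).astype(int)

# one lookup for every pixel at once - no Python loop
out = clut[clut_r, clut_g, clut_b]
print(out[0, 0])   # the red pixel became green
print(out[0, 1])   # black stays black
```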

The identity CLUT

An identity CLUT, when applied, produces an image identical to the source image. In other words, the identity CLUT maps each color in the source image to the same color. The identity CLUT is a perfect base for us to build upon – we can change parts of the identity CLUT to manipulate certain colors while other colors in the image are left unchanged.

def create_identity(size):
    clut = np.zeros((size, size, size, 3))
    scale = 255 / (size - 1)
    for b in range(size):
        for g in range(size):
            for r in range(size):
                clut[r, g, b, 0] = r * scale
                clut[r, g, b, 1] = g * scale
                clut[r, g, b, 2] = b * scale
    return clut 

Let us generate a 2-bit identity CLUT and see how applying it affects our image:

two_bit_identity_clut = create_identity(2)
identity_truck = fast_apply_3d_clut(two_bit_identity_clut, 2, truck)

The two-bit truck

That’s in the same ballpark as the original image, but clearly there’s a lot wrong there. The problem is our 2-bit CLUT – we had a palette of only 8 colors (2 * 2 * 2) to choose from. Let us try again, but this time with a 12-bit CLUT:

twelve_bit_identity_clut = create_identity(12)
identity_truck = fast_apply_3d_clut(twelve_bit_identity_clut, 12, truck)
Left – the original image, right – the image after applying the 12-bit identity CLUT

That’s much better. In fact, I can see no discernible differences between the images. Wunderbar!

Let us try mapping the reds to the greens again. Our goal is to map all pixels that are sufficiently red to green. What’s “sufficiently red”? For our purposes, all pixels that end up being mapped to the reddish corner of the CLUT cube deserve to be green.

green_clut = create_identity(12)
green_clut[5:, :4, :4] = np.array([0, 255, 0])
green_truck = fast_apply_3d_clut(green_clut, 12, truck)

That’s comically bad. Of course, we got what we asked for – some reddish parts of the image did get mapped to a bright ugly green. Let us restore our faith in CLUTs by attempting a slightly less drastic and potentially pleasing effect – make all pixels slightly more green:

green_clut = create_identity(12)
green_clut[:, :, :, 1] += 20
# clip so that already-bright greens don't overflow when cast to uint8
green_clut = np.clip(green_clut, 0, 255)
green_truck = fast_apply_3d_clut(green_clut, 12, truck)
Left – the original image, Right – the image with all pixels shifted more to green

Slightly less catastrophic. But we didn’t need CLUTs for this – we could have simply looped through all the pixels and manually added a constant value to the green channel. Theoretically, we can get more pleasing effects by fancier manipulation of the CLUT – instead of adding a constant value, maybe add a higher value to the reds and a lower value to the whites? You can probably see where this is going – coming up with good CLUTs (at least programmatically) is not trivial.

What do we do now? Let’s get us some professionally created CLUTs.


HaldCLUTs

We are going to apply the “Fuji Velvia 50” CLUT that is bundled with RawTherapee to our truck image. These CLUTs are distributed as HaldCLUT png files, and we will spend a few minutes understanding the format before writing a method to apply a HaldCLUT to the truck. But why HaldCLUTs?

  1. HaldCLUTs are high-fidelity. Our 12-bit identity CLUT was good enough to reproduce the image, and each HaldCLUT bundled with RawTherapee is equivalent to a 144-bit 3D CLUT. Yes, that’s effectively a CLUT of shape (144, 144, 144, 3).
  2. However, the real benefit of using HaldCLUTs is the file size. Adobe’s .cube CLUT format is essentially a plain text file with RGB values. Since each character in the text file takes up a byte, a 144-bit CLUT in .cube takes up around 32MB on disk. The equivalent HaldCLUT png image file is around a megabyte. But png images are two-dimensional. How can we encode three-dimensional data using a two-dimensional image? We’ll see.
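The numbers above check out with some back-of-the-envelope arithmetic (the 11-bytes-per-line figure is my rough estimate for a typical .cube text line):

```python
import math

# one RGB triple per cell of the 144-level cube
entries = 144 ** 3
print(entries)                  # 2985984

# at roughly 11 bytes of text per "R G B" line, a .cube file needs
# about entries * 11 bytes
approx_mb = entries * 11 / 1e6  # in line with the ~32MB figure above
print(round(approx_mb, 1))

# a HaldCLUT stores the same cells as the pixels of a square png
side = math.isqrt(entries)      # 1728, since 1728 * 1728 == 144 ** 3
print(side)
```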

Let’s look at an identity HaldCLUT:

The identity HaldCLUT, generated using convert hald:12 -depth 8 -colorspace sRGB hald_12.png

Pretty pretty colors. You’d have noticed that the image seems to have been divided into little cells. Let’s zoom in on the cell on the top-left corner:

We notice a few things – the pixel on the top-left is definitely black, so it represents the first “bucket” or “cell” in a 3D CLUT, and pure blacks (i.e. rgb(0, 0, 0)) are going to be mapped to the color present in this bucket. Of course, the pixel at (0, 0, 0) in the above image is black because we are dealing with an identity CLUT here – a different CLUT could have mapped the index (0, 0, 0) to gray. The confusing part is figuring out how to index into the HaldCLUT – let’s say we have a bright red pixel with the value (200, 0, 0) in our source image. If we were dealing with a normal 144-bit 3D CLUT, we would know that a red value of 200 belongs to the index 200 * 144 / 255 = 113 (approximately), and we would replace the color of this pixel with whatever was at CLUT[113][0][0]. But we are not dealing with a 3D CLUT here – we have a 2D image, and we have to index into it as if it were a 3D CLUT.

The entire identity HaldCLUT image in our example has the shape (1728, 1728). Each of those little cells has the shape (12, 144); there are 144 such cells stacked vertically in the image, and 12 cells across each row. That gives us 1728 cells in the entire HaldCLUT, each cell having the shape (12, 144). This is how we index into a HaldCLUT file:

(if the description doesn’t make much sense, it is followed by a code snippet that’s hopefully clearer)

  1. Within each cell, the red index always changes from left to right – it is 0 in the first column of a cell, 1 in the second column, and so on up to 143 in the last. This is the case in every row within every cell: since each cell has 12 rows, the red index runs from 0 to 143 in each of them.
  2. The green index is constant within each row of a cell, increments by 1 across cells horizontally, and wraps around to the next row of pixels. So the pixel at position (143, 0) in the HaldCLUT image (positions given as (x, y)) represents the index (143, 0, 0), while the pixel at position (144, 0) represents the index (0, 1, 0), and so on. The pixel at position (0, 1) – the first pixel of the second row – represents the index (0, 12, 0), since the 12 cells of the first row used up the green indices 0 through 11.
  3. The blue index is constant everywhere within a cell, and increments by 1 across cells vertically. So the pixel at position (0, 11) represents the index (0, 132, 0), while the pixel at (0, 12) represents the index (0, 0, 1). Notice how both the red and green indices reset to 0 when we move down the HaldCLUT image by an entire cell.
The top-left corner extracted from the full identity HaldCLUT. Only the first 3 rows and two columns are shown here (the third column is clipped). Note that the annotations show the index into the 3D CLUT that each pixel would represent if the HaldCLUT were a normal 3D CLUT. Each cell has the shape (12, 144). When two lines in the diagram seem to come out of the same pixel, I am trying to show how the represented index changes between adjacent pixels at a cell boundary.
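The three rules above can be collapsed into a little helper that converts a 3D CLUT index into a pixel position in the 1728×1728 HaldCLUT image (the function name is mine; positions are (x, y)):

```python
def clut_index_to_hald_position(r, g, b, clut_size=12):
    side = clut_size ** 3                    # 1728 pixels per image row
    # flatten in HaldCLUT order: red varies fastest, then green, then blue
    flat = r + (clut_size ** 2) * g + (clut_size ** 4) * b
    return flat % side, flat // side         # (x, y)

print(clut_index_to_hald_position(143, 0, 0))  # (143, 0): end of the first cell row
print(clut_index_to_hald_position(0, 12, 0))   # (0, 1): green wraps to the next pixel row
print(clut_index_to_hald_position(0, 0, 1))    # (0, 12): blue starts a new row of cells
```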

Inspecting the identity HaldCLUT in python reveals the same info:

import math
identity = Image.open("identity.png")
identity = np.asarray(identity)
print("identity HaldCLUT has shape: {}".format(identity.shape))
size = round(math.pow(identity.shape[0], 1/3))
print("The CLUT size is {}".format(size))
# The CLUT size is 12
print("clut[0,0] is {}".format(identity[0, 0]))
# clut[0,0] is [0 0 0]
print("clut[0, 100] is {}".format(identity[0, 100]))
# clut[0, 100] is [179   0   0]
print("clut[0, 143] is {}".format(identity[0, 143]))
# We've reached the end of the first row in the first cell
# clut[0, 143] is [255   0   0]
print("clut[0, 144] is {}".format(identity[0, 144]))
# The red channel resets, the green channel increments by 1
# clut[0, 144] is [0 1 0]
print("clut[0, 248] is {}".format(identity[0, 248]))
# clut[0, 248] is [186   1   0]
# Notice how the value in the green channel did not increase. This is normal - we have 256 possible values and only 144 "slots" to keep them. The identity CLUT occasionally skips a value.
print("clut[0, 432] is {}".format(identity[0, 432]))
# clut[0, 432] is [0 5 0]
# ^ The red got reset, the CLUT skipped more values in the green channel and now maps to 5. This is the peculiarity of this CLUT. A different HaldCLUT (not the identity one) might have had a different value for this green channel step.
print("clut[0, 1727] is {}".format(identity[0, 1727]))
# clut[0, 1727] is [255  19   0]
# This is the last pixel in the first row of the entire image
print("clut[1, 0] is {}".format(identity[1, 0]))
# clut[1, 0] is [ 0 21  0]
# Notice how the value in the green channel "wrapped around" from the previous row
print("clut[1, 144] is {}".format(identity[1, 144]))
# Exercise for the reader: see if you can guess the output correctly 🙂
print("clut[12, 0] is {}".format(identity[12, 0]))
print("clut[12, 143] is {}".format(identity[12, 143]))
print("clut[12, 144] is {}".format(identity[12, 144]))

Applying a HaldCLUT

Now that we’ve understood how a 3D CLUT is sorta encoded in a HaldCLUT png, let’s go ahead and write a method to apply a HaldCLUT to an image:

import math 
def apply_hald_clut(hald_img, img):
    hald_w, hald_h = hald_img.size
    clut_size = int(round(math.pow(hald_w, 1/3)))
    # We square the clut_size because a 12-bit HaldCLUT has the same amount of information as a 144-bit 3D CLUT
    scale = (clut_size * clut_size - 1) / 255
    # Convert the PIL image to numpy array
    img = np.asarray(img)
    # We are reshaping to (144 * 144 * 144, 3) - it helps with indexing
    hald_img = np.asarray(hald_img).reshape(clut_size ** 6, 3)
    # Figure out the 3D CLUT indexes corresponding to the pixels in our image
    clut_r = np.rint(img[:, :, 0] * scale).astype(int)
    clut_g = np.rint(img[:, :, 1] * scale).astype(int)
    clut_b = np.rint(img[:, :, 2] * scale).astype(int)
    # Convert the 3D CLUT indexes into indexes for our HaldCLUT numpy array and copy over the colors to the new image
    filtered_image = hald_img[clut_r + clut_size ** 2 * clut_g + clut_size ** 4 * clut_b]
    filtered_image = Image.fromarray(filtered_image.astype('uint8'), 'RGB')
    return filtered_image

Let’s test our method by applying the identity HaldCLUT to our truck – we should get a visually unchanged image back:

identity_hald_clut = Image.open(urllib.request.urlopen(""))  # image URL omitted
identity_truck = apply_hald_clut(identity_hald_clut, truck)

Let us finally apply the “Fuji Velvia 50” CLUT to our truck:

velvia_hald_clut = Image.open(urllib.request.urlopen(""))  # image URL omitted
velvia_truck = apply_hald_clut(velvia_hald_clut, truck)
Left – the original image, Right – the image after applying the “Fuji Velvia 50” HaldCLUT

That worked! You can download more HaldCLUTs from the RawTherapee page. The monochrome (i.e. black and white) HaldCLUTs won’t work straight away because our apply_hald_clut() method expects a hald image with 3 channels (i.e. red, green and blue), while the monochrome HaldCLUT images have only 1 channel (the grey value). It won’t be difficult at all to change our method to support monochrome HaldCLUTs – I leave that as an exercise to the reader 😉

Notes and further reading

Remember how we saw that a 2-bit identity CLUT gave us poor results while a 12-bit one almost reproduced our image? That gap is not inevitable. Image editing software can interpolate between the missing values. For example, this is how PIL applies a 3D CLUT with linear interpolation.
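For what it’s worth, Pillow (the maintained PIL fork) ships this as ImageFilter.Color3DLUT, which interpolates between table entries when the filter is applied. A quick sketch (the table size of 5 and the identity callback are just for illustration):

```python
from PIL import Image, ImageFilter

# generate() builds the table by calling the callback with r, g, b in 0..1;
# returning them unchanged gives an identity LUT
identity_lut = ImageFilter.Color3DLUT.generate(5, lambda r, g, b: (r, g, b))

img = Image.new('RGB', (8, 8), color=(200, 30, 30))
out = img.filter(identity_lut)
# thanks to interpolation, even this tiny table reproduces the color
print(out.getpixel((0, 0)))   # close to (200, 30, 30)
```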

The “Fuji Velvia 50” HaldCLUT that we used is an approximation of Fujifilm’s proprietary Velvia film simulation, (probably) by Pat Davis.

If you want to create your own HaldCLUT, the easiest way would be to open up the identity HaldCLUT png file in an image editor (e.g. RawTherapee, Darktable, Adobe Lightroom) and apply global edits to it. For example, if you change the saturation and contrast values of the HaldCLUT png using the image editor, and then apply this modified HaldCLUT png (using our python script, or a different image editor – it doesn’t matter how) to a different image, the resulting image would have more contrast and saturation. Neat, right?

Living the ‘Life’!

After a lot of tinkering and pondering over the code, I managed to connect our GUI to the core of the program. I even managed to throw in some sort of control using a few buttons and a slider (which determines the size of the board). The program now works in two modes: one lets you input an initial configuration and computes the next state on every right click, and the user can click on any cell at any time to make it alive. The other mode, dubbed the ‘continuous mode’, lets you modify the board only at the start of the program. Once the user presses the right mouse button, the program keeps on displaying subsequent states continuously – which is more fun to watch – and finally settles down in a stable state.

Piecing everything together

The buttons (widgets) and the slider were created using the wxpython library. For a tutorial, visit zetcode.

I’ve tried to put together the code as neatly as possible, and I have added a lot of comments too. I would like to point out that my version of the program has a little problem with mouse clicks. It was unnoticeable when I was using a mouse, but when I used the touchpad of my laptop, I found that in some cases I had to click a cell twice to make it alive.

Here’s the source code: github link

Use the script to run the program, and make sure you have the pygame and wxpython libraries installed.

Also, if you think that the program moves all too quickly (or slowly) in the continuous mode, change the framerate of the display to your liking. To do this, open the file and, in line 49, change the value inside pygame.time.Clock().tick() to your desired framerate.

I *might* convert the python script into an executable for windows and upload it. The conversion can be done using py2exe. And I *might* have some enthusiasm left to add even more features to my humble program. If I do, then that’s another post 😉

Green dots ahead – playing with pygame

Now that the ‘core’ of the program is up and running (see the previous post: game of life 101), it’s time to add a little glamour to our ‘life’!

So, here’s the recipe :

Dish name : Graphical interface for the game of life

Ingredients required :

1. Python interpreter and some knowledge in python (easily available at a programming site near you)

2. The pygame module for python. Download pygame here

3. A pygame introduction. I recommend inventing with python. It’s a great book and serves as an excellent introduction to python AND pygame.

Find the book here. I dare say that ‘Invent with python’ is the only book you’ll ever need to start writing graphical applications in python in no time!

4. A wee bit of patience and a tiny pinch of interest

Now that we have all the ingredients ready, it’s time to cook some delicious stuff. First of all, let me show you what the ‘game’ will look like after our dish is ready (look at the picture, stupid! :p )

Eternal sunshine of the Spot-full mind

And let me remind you that this is not going to be the graphical implementation of GOL. Why? Because, in this post, I’d just like to see the graphical display up and running. That is, the user must be able to populate the cells with dots, but updating the cells will have to wait until another post. The display window I create now will be completely independent of the text only game of life I had created earlier. But I have made the code flexible enough to be easily modified to suit our purpose at a later time.

So what does the program do exactly? It allows the user to click on a cell and create a green dot right in the middle of the cell. Period.

So, here’s what I did:

1. Divided the screen into cells by drawing horizontal and vertical lines. This is done by first dividing the width (and height) of the window by the number of cells I want. I am assuming that the window is a square, and hence width = height. If the window’s width is 400, I’ll divide the width by 5 to get the length of each cell. I draw a vertical line every 80 pixels one way and a horizontal line every 80 pixels the other way to get the ‘matrix’.

2. Finding the position of the mouse in terms of row and column: I identify each cell by its row number and column number. For example, the cell in the top-left corner would be the cell (0, 0). To get the column on which the mouse pointer resides, we divide the x coordinate of the mouse position by the length of each cell. Since both the mouse position and the cell length are integers, dividing the x coordinate by the cell length gives an integer that says which column the mouse pointer is on.

For example, assume that the mouse pointer is at the position (120, 300). The x coordinate is 120. Let the width of the window be 400 and let there be 5 cells (as in the picture), so the width of each cell is 80. To find the column the mouse is on, we divide 120 by 80. Since both are integers, we get 120/80 = 1. (If you do not understand this, go here.) And that means our mouse is at column 1. Note that this is the SECOND column in the window, since column numbering starts from 0. A similar procedure (using the height and the y coordinate) finds the row the mouse pointer is on.

3. On every left mouse click, the row and column of the cell on which the click occurred are added (appended) to a list called circle_list. If you are not familiar with lists in python, now might be a good time to fix that!

4. A function called draw_circles() draws a green circle (or a dot) at the center of every cell whose co-ordinates (row and column number) are in the circle_list.
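The arithmetic from steps 1 and 2 fits in a small helper (the names are mine, not from the actual program; note that Python 3 needs the explicit floor-division operator // where the post relies on integer division):

```python
def cell_under_mouse(mouse_x, mouse_y, window_size=400, num_cells=5):
    cell_length = window_size // num_cells   # 400 / 5 cells = 80 pixels per cell
    col = mouse_x // cell_length             # floor division drops the remainder
    row = mouse_y // cell_length
    return row, col

print(cell_under_mouse(120, 300))   # -> (3, 1): column 1, as in the example above
```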

If you need any reference/example regarding the program, feel free to view/try my code. Download it from here. I have thrown some comments here and there inside the file to help any readers who might find themselves completely lost 🙂

In the future, we might want the window to pass the information about the cells to the ‘core’ of the program to compute the next stage, and then display the updated information in the window. To do this, we will pass the circle_list to the other classes, which will compute the next stage and pass back an UPDATED circle_list to the display program. All the display program has to do then is draw little green circles on the cells in the circle_list (and remove green circles from the cells that were previously in the list but aren’t anymore).

As for now, have fun conjuring green dots out of thin air. Until next time 😉

Game of life 101

Manipulating “life” – the text-only way!

I know I said this would be a graphical program. But I certainly think it would be a rather good idea to get GOL (Game of Life) up and running in text-only mode first before jumping into mouse clicks and green cells. Modus operandi? Simple. Make a list of what should be implemented, and how it is intended to be implemented.

“You got to be kidding me! Make a list of what to implement and all that crap? Don’t we all write code on the go?” – a typical reaction when the teacher says “Write an algorithm to print the sum of 2 numbers”. Let me make a few things clear first:

1. This program is more complicated than, say, bubble sort, and definitely many times more complicated than finding the sum of two numbers. If you are unfamiliar with classes and constructors in object-oriented programming, I suggest you go through an introductory Python book.

2. I was one of those wannabe coders who thought algorithms and the top-down design approach were completely unnecessary as long as you have the *idea*. That works fine for bubble sort, but when we are writing a program that involves a lot of classes and a few pages of code, it’s very difficult to keep track of things. More so when the program spans multiple files. So write down, at least vaguely, what we are going to do and how we are going to get it done.

3. Yes, this program is going to span multiple files. Doing so not only gives me a sense of achievement (how many one-file programs have you encountered? Hint: none) but adds to the overall readability of the project.

4. Programming in python makes life a lot simpler. And I’m still more or less a wannabe coder 😉

Let me begin by describing the game board. When I talked about grids and cells in my last post, only one obvious way to do it came to mind: matrices. The game board or “playing area” will necessarily be a matrix with a number of rows and columns. Each ‘element’ or ‘position’ in that matrix can be either dead or alive. The user will provide an initial configuration by marking live cells with a value of 1 and dead cells with a value of 0. Then, when the user presses a particular key, a compute() function will compute the number of live neighbours for each cell, and then update each cell based on that number.

My particular version of text-only, dumbed-down GOL was done using 3 files: the core file, which contains a Game class whose primary job is just to sit there and call other classes and functions; a file that creates an empty matrix and contains functions to let the user input an initial configuration; and, as you might have already guessed, a file containing a ComputeNeighbours class, which computes the number of live neighbours of a cell, and a Generate class, which updates each cell based on its number of live neighbours.

A word on the matrix I used, though: I found it a bit hard to swallow, but there really are no ‘arrays’ in python. Instead, they have this wonderful stuff called lists, which is pretty much an uncomplicated, dumbed-down version of arrays. I come from a C background, and I spent a considerable amount of time gaping at the screen because I could not for the life of me figure out how to implement two-dimensional arrays (our matrix) in python. A friend came to the rescue, and this is how I did it:

matrix=[[[0,0]]*no_of_cols for x in range(no_of_rows)]

Looks scary.

a list is represented as [] in python. So [0,0] is a list whose first element is 0 and whose second element is 0 as well. Let no_of_cols be 3; then [0,0]*3 will give you

[0, 0, 0, 0, 0, 0]

But here, we have written [[0,0]]*no_of_cols. And THAT gives us [[0,0],[0,0],[0,0]]. Tadaaaaaa – a list of lists!!

Let no_of_rows be 3 as well.

so [[[0,0]]*no_of_cols for x in range(no_of_rows)] will give us:

[[[0, 0], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]]]

which really is :

[[0,0], [0,0], [0,0]]

[[0,0], [0,0], [0,0]]

[[0,0], [0,0], [0,0]]
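One caveat about that one-liner worth flagging before moving on: [[0,0]]*no_of_cols repeats references to the SAME inner list, so all the cells within a row alias one [0, 0] object, and mutating one cell mutates the whole row. A quick demonstration, with a nested comprehension as the safe alternative:

```python
matrix = [[[0, 0]] * 3 for x in range(3)]
matrix[0][0][0] = 1
print(matrix[0])   # [[1, 0], [1, 0], [1, 0]] - all three cells "changed"

# building each cell independently avoids the aliasing
safe = [[[0, 0] for _ in range(3)] for _ in range(3)]
safe[0][0][0] = 1
print(safe[0])     # [[1, 0], [0, 0], [0, 0]]
```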

And that’s our 3×3 two-dimensional matrix! But hey, why is each element in the matrix a list as well? Because I’ll be using a list to represent each cell. The first value in the list says whether the cell is alive (1) or dead (0). The second value tells us the number of live neighbours that cell has. For example, if the user inputs an initial configuration in which all the cells are alive, then the first element of the list representing each cell will be 1. Moreover, all the corner cells will have 3 live neighbours, the cells on the boundary will have 5 live neighbours, and the cells NOT on the boundary will have 8 live neighbours. The matrix is passed on to the ComputeNeighbours class. This is how the number of live neighbours is calculated:

1. First, the location of the cell is determined. A cell can either be on a corner, on a row boundary (as in a[i][0], where ‘a’ is the matrix), on a column boundary (as in a[0][i]), or not on the edge at all.

2. Depending on the location of the cell, the number of neighbour cells it has varies. So a different function is called to compute the number of neighbours for each type of cell (that is, depending on whether the cell is on a corner or a boundary).

3. The number of live neighbours of each cell is written into the second position of the list representing that cell. If the top-left corner cell is alive and has 2 live neighbours, then its entry in the matrix would be [1, 2] – and so on.

After we compute the number of live neighbours of each cell and feed this data into the matrix, finding the next state of the matrix is easy. Just apply the rules to make each cell dead (0) or alive (1). As far as my program goes, after the user enters the initial configuration, pressing 1 on the keyboard prints out the next state of the matrix. The user inputs 0 to exit the program.

Although I’ll have to make some serious changes to this program when adding the graphical stuff, doing the text-only GOL was necessary since, in the graphical version, I’d simply be displaying information present in the matrix on the screen.

I’ve uploaded my version of text-only GOL in a zip file:

Don’t go downloading individual files. Press ctrl+s to download the zip file or go to file->download. Ignore the .pyc files – they are automatically generated when you run the .py files for the first time.

Even though everything *seemed* very straightforward, I ran into random errors, and my code is more or less a mess, though readable enough. Execute to run the program.

Up for a game of life, anyone?

No, I’m not talking about the board game. This particular game of life was thought up by a (respected) guy named Conway way back in the 70s. The rules are pretty simple, and a little time with our friend Google can fish out a lot of info on the subject. But for the uninitiated and the lazy, let me elaborate:

Game of life
Looks creepy. But trust me, the black dots are supposed to be alive!

Imagine space – not necessarily where the space shuttles go; ordinary two-dimensional, sheet-like space will do just fine. Now imagine a speck in that space. Imagine Earth, or imagine little round stones. Now imagine it to be alive (easier if you imagine Earth). Now forget your 3rd-grade science textbook and imagine little planets AROUND the Earth – and no sun. Done? Now imagine all the planets and the Earth cuddling together! The Earth is your ‘live’ cell (or planet), and the little planets around the Earth are the ‘neighbours’ of the cell called Earth. Imagine that our Earth and the little planets are confined to a 2-D space – that is, the Earth and the planets are like drawings on a piece of paper (I know that’s a little too much elaboration, but I somehow couldn’t help writing that). The ‘neighbours’ of the cell Earth can either be alive (as in extra-terrestrial life) or dead (as in a chunk of rock). Imagine the little planets around the Earth having other planets near them as well. Now we have a pretty crowded (it depends) sheet of paper with many live and dead cells. Here are the rules:

  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction (imagine planets reproducing :p )
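The four rules above collapse into just a few lines of Python. A minimal sketch (the function name is mine, not from the program described later):

```python
def next_cell_state(alive, live_neighbours):
    """Return 1 if the cell lives in the next generation, else 0."""
    if alive:
        # rules 1-3: a live cell survives only with two or three live neighbours
        return 1 if live_neighbours in (2, 3) else 0
    # rule 4: a dead cell with exactly three live neighbours comes alive
    return 1 if live_neighbours == 3 else 0
```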

That’s it. The stage for the game of life is ready. What’s all this? It doesn’t sound very ‘interesting’?

Check out the Wikipedia article on Conway’s Game of Life.

Unimpressed? Boring? Drab?

Have another shot :

A warning though. We are now walking along the shores of that vast sea called cellular automata. Now ‘cellular automata’ was one of those terms I used to specifically avoid and classify as ‘complicated, un-readable stuff’. And it is complicated. But I read about the game of life in a different context, and it sounded simple enough, and thus it forms the basis of my next little program!

Here’s an overview of what I HOPE to implement :

1. A grid. Lines running horizontally and vertically divide the grid into cells.

2. An initial pattern, and a target cell. The goal of the player is to make the target cell alive, and maybe keep it alive for 3 turns.

3. In each turn, the user can click on a limited number of cells to turn them ‘live’.

I guess that’s pretty much it. No complicated artificial intelligence. No simulation of cool patterns. Just a bunch of boxes to click on and green dots to populate them. Let’s see how far it goes. I guess it shouldn’t be much of a problem, since Python and Pygame always come to the rescue 😉

But before going all graphical, let me try to get it up and running in text mode!

Ho Ho MapleBlow!

My very first Pygame project. It’s not professional, and it certainly isn’t the best way to implement what I was hoping to implement. But as they say, a journey of a thousand miles begins with a single step.

This little program lets the user click around the screen to make a maple leaf move. The basic idea is, the closer you click, the faster the leaf moves. The user will have to do his part by imagining that a wind blows from the point of click to the centre of the maple leaf. Also, simply keeping the mouse button pressed won’t do the trick. You have to keep it pressed AND move it a little bit. The problem is, when you keep your mouse button pressed, my program interprets it as a single mouse press as long as the location remains the same.
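That press-and-move behaviour can be sketched as plain logic, leaving Pygame out of it. Here the (pressed, position) samples stand in for what the real program reads from Pygame’s mouse events; the function and its name are illustrative, not from the original source:

```python
# Illustrative sketch: a held mouse button only produces a new "gust"
# of wind when the pointer position has actually changed.

def gusts(mouse_samples):
    """mouse_samples: one (pressed, (x, y)) tuple per frame.
    Returns the click positions that would actually push the leaf."""
    pushes = []
    last_pos = None
    for pressed, pos in mouse_samples:
        if pressed and pos != last_pos:
            pushes.append(pos)
        last_pos = pos if pressed else None
    return pushes

samples = [
    (True, (10, 10)), (True, (10, 10)),  # held still: counts only once
    (True, (12, 10)),                    # moved while held: a new push
    (False, (12, 10)),                   # released: no push
]
```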

Everything was done using Python. The maple leaf is actually a png image. I used the Pygame library to create the bulk of the program. The little menu you see before the program runs was created using wxPython. Oh, and I’ve converted the whole thing into an exe format using py2exe. Download the ready-to-execute version here :

For the source code :

Note: the source code is in a tarball. Linux users just have to extract it, but Windows users will have to find some software that can open tarball files. I think new versions of WinRAR can open that file.

Here’s an overview of what I’ve done :

1. Created a vector class to handle all those vector additions (for adding the velocity of the wind to that of the leaf)

2. A Wind class which contains functions that calculate the direction and magnitude of the vector from a point to the centre of the leaf

3. The code for creating platforms (hurdles) that reflect the leaf. Since I found the collision detection a little buggy, I’ve programmed the main file to skip over platform creation

4. Everything directly related to the protagonist of our story, the maple leaf: functions to render the picture (which is really a static png image), move it (update the coordinates), and do collision detection in case the platforms are present

5. The main file. Calls every other function, and contains the code for creating a basic menu for the program as well
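The Wind idea described above, a vector pointing from the click to the leaf’s centre whose strength grows as the click gets closer, might look something like this. The names and the inverse-distance falloff are my assumptions, not the original code:

```python
import math

def wind_vector(click, leaf_centre, strength=100.0):
    """Vector pushing the leaf away from the click point:
    unit direction from click to leaf centre, scaled so that
    closer clicks give a stronger push."""
    dx = leaf_centre[0] - click[0]
    dy = leaf_centre[1] - click[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # clicked dead centre: no defined direction
    mag = strength / dist   # inverse-distance falloff (an assumption)
    return (dx / dist * mag, dy / dist * mag)
```

A click 10 pixels to the left of the leaf pushes it right five times harder than a click 50 pixels away, which matches the “closer click, faster leaf” behaviour described above.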

The program is not very complicated, even though it spans multiple files.