Extended Depth of Field In Python using Mahotas

Last week, I wanted to implement an extended depth of field algorithm for visualisation.

The idea of extended depth of field is that you have a stack of images, taken at different focal points, and you combine them into a single artificial image in which everything is in focus. Some simple methods work OK, some very advanced methods work better. We were happy with OK for our setting. Here’s how to do it with mahotas:

Start with standard imports:

import numpy as np
import mahotas as mh

We are going to assume that you have an image object, which has three dimensions: the stack, height, and width:

stack,h,w = image.shape
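
If your slices live as separate image files on disk, a stack like this can be assembled with mh.imread (the filenames below are placeholders, not from the original post):

slices = ['slice00.tiff', 'slice01.tiff', 'slice02.tiff'] # hypothetical filenames
image = np.array([mh.imread(f) for f in slices]) # image is now (stack, h, w)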

We use mh.sobel as the measure of how in-focus each pixel is [1]:

focus = np.array([mh.sobel(t, just_filter=True) for t in image])
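
Other measures can be dropped in here (see footnote [1]). As an illustration only (not what the post used), here is a local-variance focus measure built from Gaussian filters:

def local_variance(im, sigma=3.):
    # var(X) = E[X**2] - E[X]**2, estimated over a Gaussian window
    im = im.astype(float)
    return mh.gaussian_filter(im**2, sigma) - mh.gaussian_filter(im, sigma)**2

focus = np.array([local_variance(t) for t in image])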

Now, we select the best slice at each pixel location:

best = np.argmax(focus, 0)

So far, very easy. The next part is the hard part. We want to do the following:

r = np.zeros((h,w))-1
for y in range(h):
    for x in range(w):
        r[y,x] = image[best[y,x], y, x]

But this is very slow (never run nested loops in Python if you can avoid it). We get the same result with a slightly less legible, but faster manipulation [2]:

image = image.reshape((stack,-1)) # image is now (stack, nr_pixels)
image = image.transpose() # image is now (nr_pixels, stack)
r = image[np.arange(len(image)), best.ravel()] # Select the right pixel at each location
r = r.reshape((h,w)) # reshape to get final result

Et voilà!

§

Here is an example, from a stack of plankton images. This is the maximum intensity projection:

Maximum Intensity Projection
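
For reference, the maximum intensity projection is just the per-pixel maximum over the stack (computed on the original (stack, h, w) array, before the reshaping above):

mip = image.max(0) # shape (h, w)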

This is the single most in-focus slice (selected using the Sobel measure):

Most in-focus slice
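
One plausible way to pick this single slice (a sketch, not necessarily what was done here) is to take the slice whose total Sobel response is largest:

best_slice = image[focus.reshape((stack, -1)).sum(1).argmax()] # image in its original (stack, h, w) shape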

And this is the extended depth of field result:

Extended Depth of Field

It is clearly sharper (look at some of the organelles on the top-left: they are a blur in the other two images), at the expense of some possible noise. I actually played around with blurring the image a little bit and it did improve things ever so slightly.
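
The post does not say exactly where the blur was applied; one plausible variant (an assumption, not the author's exact code) is to smooth the per-slice focus maps before taking the argmax, which reduces speckle in the slice-selection map:

focus = np.array([mh.gaussian_filter(f, 2.) for f in focus]) # sigma=2. is an arbitrary choice
best = np.argmax(focus, 0)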

§

If you use this in a scientific publication, please cite the mahotas paper.

[1] Other methods simply use a different measure here.
[2] I am not 100% convinced that this is the best approach. After all, we create an array of size len(image) just to index. I would be happy to find an alternative (one possibility is sketched below).
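
For what it is worth, NumPy later gained np.take_along_axis (in 1.15, after this post was written), which avoids building the explicit index array, at the cost of needing image in its original (stack, h, w) layout:

r = np.take_along_axis(image, best[None], axis=0)[0] # best[None] has shape (1, h, w)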

Using imread to save disk space

Imread recently gained the ability to read & write metadata.

We deal with images around here and they can get very large in terms of disk space. To make things worse, the microscope does not save them in compressed form.

Imread, however, saves compressed TIFF files. So, we needed to (1) open each file and (2) resave it. We also do not want to lose the metadata that comes with the file in the process.

This is what I ended up with:

import os
import tempfile
from os import path

import imread

def resave_file(f):
    '''
    resave_file(f)

    Resave a file using imread, preserving metadata

    Parameters
    ----------
    f : str
        Filename
    '''
    imdata, meta = imread.imread(f, return_metadata=True)
    # Write to a temporary file in the same directory first, then rename it
    # over the original (see the note on the two-step process below).
    tf = tempfile.NamedTemporaryFile('w',
                prefix='imread_resave_',
                suffix='.tiff',
                delete=False,
                dir=path.dirname(f))
    tf.close()
    imread.imsave(tf.name, imdata, metadata=meta)
    os.rename(tf.name, f)

On a test directory, disk usage went from 55GB down to 12GB. We use a two-step write-then-rename so that, if something goes wrong while writing, the original file is left untouched: it is only replaced once the compressed copy is safely on disk.
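
As a usage sketch (the directory path and the extension check are assumptions, not part of the original post), resaving every TIFF under a directory could look like this:

import os

for dirpath, _, filenames in os.walk('/path/to/images'): # hypothetical directory
    for fname in filenames:
        if fname.endswith(('.tif', '.tiff')):
            resave_file(os.path.join(dirpath, fname))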

Note: This only works with the github version of imread.