How I Use Jug & IPython Notebooks

Having just released Jug 1.0 and having recently started using IPython notebooks for data analysis, I want to describe how I combine these two technologies:

  1. Jug is for heavy computation. This should run in batch mode so that it can take advantage of a computer cluster.
  2. The IPython notebook is for visualization of the results. From the notebook, I will load the results of the jug run and plot them.

As an example, I am going to use the sort of image classification with local features that I did for my Bioinformatics paper last year. That code did not use the IPython notebook, but it already had a split between heavy computation and plotting[1].

I write a jugfile.py with my heavy computation, in this case, feature computation and classification [2]:

from jug import TaskGenerator
from features import computefeatures
from ml import crossvalidation

# computefeatures takes an image path and returns features
computefeatures = TaskGenerator(computefeatures)

# crossvalidation returns a confusion matrix
crossvalidation = TaskGenerator(crossvalidation)

images, labels = load_images() # This loads all the images (load_images is defined elsewhere)
features = [computefeatures(im) for im in images]
results = crossvalidation(features, labels)

This way, if I have 1000 images, the computefeatures step can be run in parallel and use many cores.

§

When the computation is finished, I will want to look at the results and display them; for example, by graphically plotting a confusion matrix.

The only non-obvious trick is how to load the results from jug:

from jug import value, set_jugdir
import jugfile
set_jugdir('jugfile.jugdata')
results = value(jugfile.results)

And, boom!, results is a variable in our notebook with all the data from the computations (if the computation is not finished, an exception will be raised). Let’s unpack this one by one:

from jug import value, set_jugdir

Imports from jug. Nothing special. You are just importing jug in a Python notebook.

import jugfile

Here you import your jugfile.

set_jugdir('jugfile.jugdata')

This is the important step! You need to tell jug where its data is. Here I assumed you used the defaults; otherwise, just pass a different argument to this function.
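For example, if the jugfile had been run with a non-default data directory (the name below is purely a hypothetical example), you would pass that directory instead:

from jug import set_jugdir
set_jugdir('my-other-results.jugdata')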

results = value(jugfile.results)

You now use the value function to load the value from disk. Done.

Now, use a second cell to plot:

from matplotlib import pyplot as plt
from matplotlib import cm

plt.imshow(results, interpolation='nearest', cmap=cm.OrRd)

[Figure: confusion matrix plotted as a heatmap]
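From here it is easy to keep tweaking the figure interactively in the notebook. As a small sketch (the class names below are hypothetical placeholders), you could add a colorbar and label the axes:

from matplotlib import pyplot as plt
from matplotlib import cm

names = ['class A', 'class B', 'class C'] # hypothetical class names
plt.imshow(results, interpolation='nearest', cmap=cm.OrRd)
plt.colorbar()
plt.xticks(range(len(names)), names, rotation=45)
plt.yticks(range(len(names)), names)
plt.xlabel('Predicted label')
plt.ylabel('True label')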

§

I find that this division of labour maximizes the value of each tool: jug works well for long computations and ensures that the results are consistent while making it easy to use the cluster; the IPython notebook is nicer for exploring the results and tweaking the graphical outputs.

[1] I would save the results from jug to a text file and load it from another script.
[2] This is a very simplified form of what the original actually looked like. I started to write this post trying to make it realistic, but the complexity was too much. The plot is from real data, though.

Jug 1.0-release candidate 0

New release of Jug: 1.0 release candidate

I’ve put out a new release of jug, which I’m calling 1.0-rc0. This is a release candidate for version 1.0 and if there are no bugs, in a few days, I’ll just call it 1.0!

There are few changes from the previous version, as the project has now reached maturity.

For the future, I want to start developing the hooks interface and use hooks for more functionality.

In the meantime, download jug from PyPI, watch the video, or read the tutorials or the full documentation.

Jug now outputs metadata on the computation

This weekend, I added new functionality to jug (previous related posts). Jug can now output the final result of a computation, including metadata on all the intermediate inputs!

For example:

from jug import TaskGenerator
from jug.io import write_metadata # <---- THIS IS THE NEW STUFF

@TaskGenerator
def double(x):
    return 2*x

x = double(2)
x2 = double(x)
write_metadata(x2, 'x2.meta.yaml')

When you execute this script (with jug execute), the write_metadata function will write a YAML description of the computation to the file x2.meta.yaml!

This file will look like this:

args:
- args: [2]
  meta: {completed: 'Sat Aug  3 18:31:42 2013', computed: true}
  name: jugfile.double
meta: {completed: 'Sat Aug  3 18:31:42 2013', computed: true}
name: jugfile.double

It tells you that a computation named jugfile.double was computed at 18h31 on Saturday, August 3. It also gives the same information recursively for all intermediate results.
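Since the output is plain YAML, it is also easy to load it back into Python and inspect it programmatically. A minimal sketch, assuming PyYAML is installed:

import yaml

with open('x2.meta.yaml') as f:
    meta = yaml.safe_load(f)

print(meta['name'])              # jugfile.double
print(meta['meta']['completed']) # when the final task finished
print(len(meta['args']))         # one entry per intermediate input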

§

This is the result of a few conversations I had at BOSC 2013.

§

This is part of the new release, 0.9.6, which I put out yesterday.

Building Machine Learning Systems with Python

I wrote a book. Well, only in part. Willi Richert and I wrote a book.

It is called Building Machine Learning Systems With Python and is now available from Amazon (or Amazon.co.uk), although it has already been partially available directly from the publisher for a while (in a form where you get chapters as editing is finished).

§

The book is an introduction to using machine learning in Python.

We mostly rely on scikit-learn, which is the most complete package for machine learning in Python. For my own projects I do prefer my own package, milk, but it is not as complete. It has stuff that scikit-learn does not (and stuff that they have, correctly, appropriated).

We try to cover all the major modes in machine learning and, in particular, have:

  1. classification
  2. regression
  3. clustering
  4. dimensionality reduction
  5. topic modeling

and also, towards the end, three more applied chapters:

  1. classification of music
  2. pattern recognition in images
  3. using jug for parallel processing (including in the cloud).

§

The approach is tutorial-like, without much math but with lots of code examples.

This should get people started and will be more than enough if the problem is easy (and there are still many easy problems out there). With good features (which are problem-specific, anyway), knowing how to run an SVM will very often be enough.
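As a rough illustration of how little code that can be, here is a sketch with a recent scikit-learn API (the bundled iris data stands in for real, problem-specific features):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# stand-in for a real feature matrix X and label vector y
X, y = load_iris(return_X_y=True)

# keep the test data completely separate from the training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel='rbf', C=1.0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))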

Lest you fear we are giving people just enough knowledge to be dangerous, we stress correct evaluation of the results throughout the book. We warn repeatedly against mixing up your training and testing data. This simple principle is, unfortunately, still often disregarded in scientific publications. [1]

§

There is an aspect that I really enjoyed about this whole process:

Before starting the book, I had already submitted two papers, neither of which is out yet (even though, after some revisions, they have been accepted). In the meantime, the book has been written, edited (only a few minor issues are still pending), and people have been able to buy parts of it for a few months now.

I now have renewed confidence in the choice to stay in science (also because I moved from a place where things are completely absurd to a place where they work very well). But the delay in publication that is common in the life sciences is an emotional drag. In some cases, the bulk of the work was finished a few years before the paper finally came out.

Update (July 26 2013): Amazon is now shipping the book! I changed the wording above to reflect this.

[1] It is rare to see somebody just report training accuracy and claim their algorithm does well. In fact, I have never seen it in a recent paper. However, performing feature selection or parameter tuning on the whole data prior to cross-validating on the selected features with the tuned parameters is still pretty common today (there are other sins of evaluation too: “we used multiple parameters and report the best”). This leads to inflated results all around. One of the problems is that, if you do things correctly in this environment, you risk reviewers saying “looks great, but so-and-so got better results”, because so-and-so tuned on the testing set and seems to have “beaten” you. (Yes, I’ve had this happen, multiple times; but that is a rant for another day.)
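To make the correct procedure concrete, here is a small sketch (again with scikit-learn and a stand-in dataset) in which feature selection happens inside the cross-validation loop, so it is re-fit on each training fold rather than on the whole data:

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# The selector is fit only on each training fold, so the corresponding
# test fold never influences which features get selected.
model = Pipeline([
    ('select', SelectKBest(f_classif, k=2)),
    ('svm', SVC(kernel='rbf', C=1.0)),
])

scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())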