Jug as nix-for-Python

In this post, I want to show how Jug can be understood as nix for Python pipelines.

What is Jug?

Jug is a framework for Python that enables parallelization and memoization of results, and generally facilitates reproducibility.

Consider a very classical problem setup: you want to process a set of files (in a directory called data/) and then summarize the results:

from glob import glob

def count(f):
    # Imagine a long running computation
    n = 0
    for _ in open(f):
        n += 1
    return n

def mean(partials):
    final = sum(partials)/len(partials)
    with open('results.txt', 'wt') as out:
        out.write(f'Final result: {final}\n')


inputs = glob('data/*.txt')
partials = [count(f) for f in inputs]
mean(partials)

This works well, but if the count function takes a while (which would not be the case in this example), it would be great to be able to take advantage of multiple processors (or even a computer cluster) as the problem is embarrassingly parallel (this is an actual technical term, by the way).

With Jug, the code looks just a bit different and we get parallelism for free:

from glob import glob
from jug import TaskGenerator

@TaskGenerator
def count(f):
    # Long running computation
    n = 0
    for _ in open(f):
        n += 1
    return n

@TaskGenerator
def mean(partials):
    final = sum(partials)/len(partials)
    with open('results.txt', 'wt') as out:
        out.write(f'Final result: {final}\n')


inputs = glob('data/*.txt')
partials = [count(f) for f in inputs]
mean(partials)

Now, we can use Jug to obtain parallelism, memoization and all the other goodies.
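For example, assuming the script above is saved as jugfile.py, you can launch several jug execute processes (in different shells, or as separate jobs on a cluster) and they will split the tasks between them; jug status shows how far along the computation is:

$ jug execute jugfile.py &   # start as many of these as you have processors
$ jug execute jugfile.py &
$ jug status jugfile.py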

Please see the Jug documentation for more info on how to do this.

What is nix?

Nix is a package management system, similar to those used in Linux distributions or conda.

What makes nix almost unique (Guix shares similar ideas) is that nix attempts perfect reproducibility using hashing tricks. Here’s an example of a nix package:

{ buildPythonPackage, fetchPypi, numpy, bottle, pyyaml, redis, six, zlib }:

buildPythonPackage rec {
  pname = "Jug";
  version = "2.0.0";
  buildInputs = [ numpy ];
  propagatedBuildInputs = [
    bottle
    pyyaml
    redis
    six
    zlib
  ];

  src = fetchPypi {
    pname = "Jug";
    version = "2.0.0";
    sha256 = "1am73pis8qrbgmpwrkja2qr0n9an6qha1k1yp87nx6iq28w5h7cv";
  };
}

This is a simplified version of the Jug package itself (the full thing is in the official repo). The Nix language is a bit hard to read in detail. For today, what matters is that this is a package that depends on other packages (numpy, bottle, …) and is a standard Python package obtained from PyPI (nix has library support for these common use-cases).

The result of building this package is a directory with a name like /nix/store/w8d485y2vrj9wylkd5w4k4gpnf7qh3qk-python3.6-Jug-2.0.0
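For example, building it with nix-build prints that store path (assuming here that Jug is exposed under the python3Packages attribute set in your nixpkgs checkout; the exact attribute name may differ):

$ nix-build '<nixpkgs>' -A python3Packages.jug
/nix/store/w8d485y2vrj9wylkd5w4k4gpnf7qh3qk-python3.6-Jug-2.0.0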

You may be able to guess that the bit in the middle there, w8d485y2vrj9wylkd5w4k4gpnf7qh3qk, is a computed hash of some sort. In fact, it is the hash of the code used to build the package.

If you change the source code for the package or how it is built, then the hash will change. If you change any dependency, then the hash will also change. So, the final result identifies exactly what was used to get there.

Jug as nix-for-Python pipelines

Above, I did not present the internals of how Jug works, but it is very similar to nix. Let's unpack the magic a bit:

@TaskGenerator
def count(f):
    ...

@TaskGenerator
def mean(partials):
    ...
inputs = glob('data/*.txt')
partials = [count(f) for f in inputs]
mean(partials)

This can be seen as an embedded domain-specific language for specifying the dependency graph:

partials = [Task(count, f)
                for f in inputs]
Task(mean, partials)

Now, Task(count, f) will get repeatedly instantiated with a particular value for f. For example, if the files in the data directory are named 0.txt, 1.txt, …

[Figure from the Jug manuscript]

Jug works by hashing together count and the values of f to uniquely identify the results of each of these tasks. If you've used Jug, you will certainly have noticed the appearance of a magic directory jugfile.jugdata, with files with names such as jugfile.jugdata/37/b4f7f68c489a6cf3e62fdd7536e1a70d0d2b87. This is equivalent to the /nix/store/w8d485y2vrj9wylkd5w4k4gpnf7qh3qk-python3.6-Jug-2.0.0 path above: it uniquely identifies the result of some computational process so that, if anything changes, the path will change.

Like nix, it works recursively, so that Task(mean, partials), which expands to Task(mean, [Task(count, "0.txt"), Task(count, "1.txt"), Task(count, "2.txt")]) (assuming 3 files, called 0.txt,…) has a hash value that depends on the hash values of all the dependencies.
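Conceptually, the hashing works something like the following sketch (a simplification for illustration, not Jug's actual implementation):

import hashlib

def task_hash(name, args):
    # Simplified sketch (not Jug's real code): mix the function's name with
    # its arguments; a dependency contributes its own hash, so any change
    # propagates recursively to everything downstream.
    h = hashlib.sha256()
    h.update(name.encode())
    for a in args:
        h.update(repr(a).encode())
    return h.hexdigest()

counts = [task_hash('count', [f]) for f in ['0.txt', '1.txt', '2.txt']]
print(task_hash('mean', [counts]))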

So, despite the completely different origins and implementations, in the end, Jug and nix share many of the same conceptual models to achieve something very similar: reproducible computations.

Big Data Biology Lab Software Tool Commitments

Cross-posting from our group website.

Preamble. We produce two types of code artefacts: (i) code that is supportive of results in a results-driven paper and (ii) software tools intended for widespread use.

For an example of the first type, see the Code Ocean capsule that is linked to (Coelho et al., 2018). The main goal of this type of code release is to serve as an Extended Methods section for the paper. Hopefully, it will be useful for the small minority of readers who really want to dig into the methods or build upon the results, but the work aims at biological results.

This document focuses on the second type of code release: tools that are intended for widespread use. We've released a few of these in the last few years: Jug, NGLess, and Macrel. Here, the whole point is that others use the code. We also use these tools internally, but if nobody else ever adopts the tools, we will have fallen short.

The Six Commitments

  1. Five-year support (from date of publication). If we publish a tool as a paper, then we commit to supporting it for at least five years from the date of publication. We may stop developing new features, but if there are bugs in the released version, we will assume responsibility and fix them. We will also do any minor updates to keep the tool running (for example, if a new Python version breaks something in one of our Python-based tools, we will fix it). Typically, support is provided if you open an issue on the respective GitHub page and/or post to the respective mailing-list.
  2. Standard, easy-to-install packages. Right now, this means that we provide conda packages. In the future, if the community moves to another system, we may move too.
  3. High-quality code with continuous integration. All our published packages have continuous integration and try to follow best practices in coding.
  4. Complete documentation. We provide documentation for the tools, including tutorials, example data, and reference manuals.
  5. Work well, fail well. We strive to make our tools not only work well, but also “fail well”: that is, when the user provides erroneous input, we attempt to provide good-quality error messages and to never produce bad output (including never producing partial outputs if the process terminates part-way through processing).
  6. Open source, open communication. Not only do we provide the released versions of our tools as open source, but all the development is done in the open as well.

Note for group members: This is a commitment from the group and, at the end of the day, the responsibility is Luis'. If you leave the group, you are not on the hook for five years of support; your responsibility is just the basic responsibility of any author: to be responsive to queries about what was described in the manuscript, but not anything beyond that. What it does mean is that we will not be submitting papers on tools that risk being difficult to maintain. In fact, while the goals above are phrased as outward-facing, they are also internally important so that we can keep working effectively even as group members move on.

Towards typed pipelines

In the beginning, there was the word. The word was a 16-bit variable and it could be used to store any type of information: you could treat it as an integer (signed or unsigned) or as a pointer. There was no typing. Because this was very error-prone, Hungarian notation was invented: a system whereby the name of a variable was enhanced to contain the type that the programmer intended, so that piCount was a pointer to integer named Count.

Nowadays, all languages are typed, either at compile-time (static), at run-time (dynamic), or a mix of the two, and Hungarian notation is no longer used.

There are often big fights on whether static or dynamic typing is best (329 million Google hits for “is static or dynamic typing better”), but typing itself is uncontroversial. In fact, I think most programmers would find the idea of using an untyped programming language absurd.

Yet, there is one domain where untyped programming is still widely used, namely when writing pipelines that combine multiple programmes. In that domain, there is one type: the stream of Bytes. The stream of Bytes is one of the most successful types in programming. In fact, “everything is a file” (which is snappier than “everything is a stream of Bytes”, even though that is what it means) is often considered one of the defining features of Unix.

Like “B” programmes of old (the untyped programming language that came before “C” introduced all the type safety that is its defining characteristic), most pipelines use Hungarian notation to mark file types. For example, if we have a file called data.fq.gz, we will assume that it is a gzipped FastQ file. There is additionally some limited dynamic typing: if you try to un-gzip a file that is not actually compressed, then gzip is almost certain to detect this and fail with an error message.
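The same dynamic check is easy to trigger from Python (a minimal illustration; the filename is made up and the file is assumed to exist but not actually be gzip-compressed):

import gzip

try:
    with gzip.open('not-actually-compressed.fq.gz', 'rt') as fh:
        fh.readline()
except OSError as e:
    # gzip checks the magic bytes at runtime and fails loudly
    print('dynamic "type error":', e)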

However, for any other type of programming, we would never consider this an acceptable level of typing: semi-defined Hungarian notation and occasional dynamic checks is not a combination proposed by any modern programming language. And, thus, the question is: could we have typed pipelines, where the types correspond to file types?

When I think of the pros and cons of this idea, I constantly see that it is simply a rehash of the discussion of types in programming languages.

Advantages of typed pipelines

  1. Better error checking. This is the big argument for types in programming languages: it’s a safety net.
  2. More automated type-transformations. In traditional programming languages, we call this casting: transforming one type into another. In pipelines, this could correspond to automatically compressing/decompressing files, for example (see the sketch after this list). Some tools support this already: just pass in a file with the extension .gz and it will be uncompressed on the fly; but this is not universal (gzip is widely supported, but not universally so, and it is hard to keep track of which tools support bzip2). In regular programming languages, we routinely debate how much casting should be automatic and how much needs to be made explicit.
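To make point 2 concrete, here is a hypothetical Python sketch (FastQ and its automatic decompression are invented for illustration; this is not an existing API):

import gzip
from pathlib import Path

class FastQ:
    '''A hypothetical "file type": a path plus the promise that its
    contents are FastQ-formatted (possibly gzip-compressed).'''
    def __init__(self, path):
        self.path = Path(path)

    def open(self):
        # The automatic "cast": transparently decompress gzipped inputs
        if self.path.suffix == '.gz':
            return gzip.open(self.path, 'rt')
        return open(self.path, 'rt')

# A downstream step declares that it takes a FastQ rather than "some path":
def first_header(reads: FastQ) -> str:
    with reads.open() as fh:
        return fh.readline().strip()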

Disadvantages

  1. It’s more work, at least at the beginning. I still believe that it pays off in the long run, but you do require a bit more investment at the onset. In most programming languages, the debate about typing is dead: typing won. However, there is still debate on static vs. dynamic typing, which hinges on this idea of more work now for less work later.
  2. False sense of security, as there will still be bugs and slight mismatches in file types (e.g., you can imagine a pipeline that takes an image as input and accepts TIFF files, but fails on one of the 1,000s of subtypes of TIFF out there, as TIFF is a complex format). This is another mirror of debates in programming languages: if you have a function that accepts integers, but only integers between 0 and 100, is it beneficial to have a complex type system that can guarantee this?
  3. Sometimes, it may just be the right thing to do to pass one type off as another, and a language could be too restrictive. I can imagine situations where it could make sense to treat a FastQ file as a columnar file to extract a particular element from the headers. Perhaps our language needs an escape hatch from typing. Some languages, such as C, provide such escape hatches by allowing you to treat everything as Bytes if you wish. Others do not (or only do so indirectly, by allowing you to call C code for all the dangerous stuff).

I am still not sure what a typed pipeline would look like exactly, but I increasingly see this as the question behind the research programme that we started with NGLess (see manuscript), although I had started thinking about this before, with Jug (https://jug.readthedocs.io/en/latest/).

Update (July 6 2019): After the initial publication of this post, I was pointed to Bioshake, which seems to be a very interesting attempt to go down this path.

Update II (July 10 2019): A few more pointers: janis is a Python library for workflow definition (an EDSL) which includes types for bioinformatics. Galaxy also includes types for file data.

NG-meta-profiler & NGLess paper published

(I wanted to write about this earlier, but June was a crazy month with manuscript submissions, grant submissions, and a lot of travel.)

The first NGLess manuscript was finally published. See this twitter thread for a summary, which I will not rehash here as I have already written extensively about NGLess and the ideas behind it here.

I also wrote a Nature Microbiology Community blogpost with some of the history behind the tool, emphasizing again how long it takes to get to a robust tool.

Compared to the preprint, the major change is that (in response to reviewer comments) we enhanced the benchmarking section. Profiling metagenomics tools is difficult. If you use an in silico simulation, you need a realistic distribution as a basis. In our case, we used real data to define the species distribution (using mOTUs2, see https://motu-tool.org/). To obtain the simulated metagenomes, we simulated reads from sequenced genomes according to the real data distribution. There are limitations to this approach, in that we still miss a lot of the complexity of real samples, but we cannot simulate the true unknown. In the end, the functional profiles produced by NG-meta-profiler had high correlations with the ground truth (0.88 for the human gut, 0.82 for the marine environment, Spearman correlation).

Additionally, we included a more explicit discussion of the advantages of NGLess for developing tools like NG-meta-profiler, in the Pipeline design with NGLess section.