Numpy/scipy backwards stability debate (and why freezing versions is not the solution)

This week, a discussion broke out about the stability of the Python scientific ecosystem. It was triggered by a blog post from Konrad Hinsen, which led to several twitter follow-ups.

First of all, let me say that numpy/scipy are great. I use them and recommend them all the time. I am not disparaging the projects as a whole or the people who work on them. It’s just that I would prefer if they were more stable. Given twitter’s limitations, perhaps this was not as clear as I would like in my own twitter response:

I pointed out that I have been bitten by API changes:

All of these are individually minor (and can be worked around), but these are just the issues that I have personally run into and that caused enough problems for me to remember them. The most serious was the mannwhitneyu change, which was a silent change (i.e., the function started returning a different result rather than raising an exception or another type of error).
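
For that particular case, one defence is to spell out the arguments explicitly instead of relying on defaults. A minimal sketch, assuming a scipy recent enough to have the alternative keyword:

from scipy.stats import mannwhitneyu

a = [1.2, 2.3, 3.1, 4.8, 2.9]
b = [2.2, 3.9, 4.1, 5.0, 4.4]

# Passing `alternative` explicitly means the result no longer depends on
# whatever the default happens to be in the installed scipy version.
u, p = mannwhitneyu(a, b, alternative='two-sided')
print(u, p)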

*

Konrad had pointed out the Linux kernel project as one extreme version of “we never break user code”:

The other extreme is the Silicon-Valley-esque “move fast and break stuff”, which is appropriate for a new project. These are not binary decisions, but two extremes of a continuum. I would hope to see numpy move more towards the “APIs are contracts we respect” side of the spectrum as I think it currently behaves too much like a startup.

Numpy does not use semantic versioning, but if it did, almost all its releases would be major releases, as they almost always break some code. We’d be at Numpy 14.0 by now. Semantic versioning would allow for a smaller number of “large, possibly-breaking releases” (every few years) instead of a constant stream of minor backwards-incompatible changes. We’d have Numpy 4.2 now, and a list of deprecated features to be removed by 5.0.

Some of the problems that have arisen could have been solved by (1) introducing a new function with the better behaviour, (2) deprecating the old one, (3) waiting a few years and removing the original version (in a major release, for example). This would avoid the most important problem, silent changes.
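
To make the pattern concrete, here is a minimal sketch in Python (the function names are made up; the point is a loud DeprecationWarning instead of a silent behaviour change):

import warnings

def robust_mean(values):
    """New function with the better behaviour (here: ignore NaNs)."""
    good = [v for v in values if v == v]   # v != v only holds for NaN
    return sum(good) / len(good)

def mean(values):
    """Old entry point: keeps working, but warns loudly until a major release removes it."""
    warnings.warn("mean() is deprecated, use robust_mean() instead",
                  DeprecationWarning, stacklevel=2)
    return robust_mean(values)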

*

A typical response is to say “well, just use anaconda (or similar) to freeze your dependencies”. Again, let me be clear: I use and recommend anaconda for everything. It’s great. But, in the context of the backwards compatibility problem, I don’t think this recommendation is really thought through, as it only solves a fraction of the problem at hand (again, an important fraction, but it’s not a magic wand). (See also this post by Titus Brown.)

What does anaconda not solve? It does not solve the problem of the intermediate layer: libraries which use numpy but are themselves meant to be used by end users. What is the expectation here? That I release my computer vision code (mahotas) with a note: “Only works on Numpy 1.11”? What if I want a project that uses both mahotas and scikit-learn, but scikit-learn is for Numpy 1.12 only? Is the conclusion that you cannot mix mahotas and scikit-learn? This would be worse than the Python 2/3 split. A typical project of mine might use >5 different numpy-dependent libraries. What are the chances that they all expect the exact same numpy version?
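
To make the intermediate-layer problem concrete, imagine such a library pinned numpy hard in its setup.py (the package name and the pins below are hypothetical):

# Hypothetical setup.py of an intermediate-layer library:
from setuptools import setup

setup(
    name='my-imaging-library',             # hypothetical package
    version='1.0',
    install_requires=['numpy==1.11.*'],    # "only works on numpy 1.11"
)

# If another library in the same project pins 'numpy==1.12.*' instead, no
# environment can satisfy both pins and the two libraries cannot be mixed.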

Right now, the solution I use in my code is “if I know that this numpy function has changed behaviour, either work around it, avoid it, or reimplement it (perhaps by copying and pasting from numpy)”. For example, some functions return views or copies depending on which version of numpy you have. To handle that, just add a “.copy()” call to all of them and now you always have a copy. It’s computationally inefficient, but avoiding even a single bug over a few years probably saves more time in the end.
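
For example, a sketch of that defensive style (np.diagonal is one function whose copy-versus-view behaviour has changed across numpy versions):

import numpy as np

x = np.arange(9).reshape((3, 3))

# Depending on the numpy version, np.diagonal() returned either a copy or a
# (read-only) view. Forcing a copy makes the behaviour the same everywhere,
# at the cost of some extra memory and time.
d = np.diagonal(x).copy()
d[0] = -1    # always safe: we own d, whatever numpy version is installed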

It also happens all the time that I have an anaconda environment, add a new package, and numpy is upgraded/downgraded. Is this to be considered buggy behaviour by anaconda? Anaconda currently does not upgrade everything to Python 3 when you request a package that is not available on Python 2, nor does it downgrade from 3 to 2; why should it treat numpy any differently if there is no guarantee that behaviour is consistent across numpy versions?

Sometimes the code at hand is not even an officially released library, but some code from another project. Let’s say that I have code that takes a metagenomics abundance matrix, does some preprocessing, and computes stats and plots. I might have written it originally for a paper a few years back, but now want to do the same analysis on new data. Is the recommendation that I always rewrite it from scratch because there is a new numpy version? What if it’s someone else asking me for the code? Is the recommendation that I ask “are you still on numpy 1.9? Because I only really tested it there”? Note that Python’s dynamic nature actually makes this problem worse than in statically checked languages.

What about training materials? As I also wrote on twitter, it’s when teaching Python that I suffer most from Python 2-vs-Python 3 issues. Is the recommendation that training materials clearly state “This is a tutorial for numpy 1.10 only. Please downgrade to that version or search for a more up-to-date tutorial”? Note that beginners are the ones most likely to struggle with these issues. I can perfectly understand what it means that “array == None and array != None do element-wise comparison” (from the numpy 1.13 release notes). But if I was just starting out, would I understand it immediately?
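
For the record, this is what that sentence from the release notes means in practice (what gets printed depends on the installed numpy):

import numpy as np

arr = np.arange(3)

# Recent numpy: element-wise comparison -> array([False, False, False])
# Older numpy: a single scalar False (with a FutureWarning)
print(arr == None)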

Freezing the versions solves some problems, but does not solve the whole issue of backwards compatibility.


How NGLess uses its version declaration

NGLess is my metagenomics tool, which is based on a domain specific language. So, NGLess is both a language and a tool (which implements the language).

Since the beginning, ngless has had a focus on reproducibility, and one of the small ways in which this was implemented is that every ngless script is required to start with a version declaration:

    ngless "0.5"

This was always intended to enable the language to change while keeping perfect reproducibility of past scripts. Until recently, though, this was just hypothetical.

In October, I taught a course on NGLess and it became clear that one of the minor inconsistencies in the previous version of the language (at the time, version “0.0”) was indeed confusing. In the previous version of the language, the preprocess function modified its arguments. No other function did this.

In version “0.5” (which was released on November 1st), preprocess is now a pure function, so that you must assign its output to a value.

However, and this is where the version declaration comes into play, the newer executable still accepts scripts with the version declaration ngless "0.0". Furthermore, if you declare your script as using ngless 0.0, then the old behaviour is used. Thus, we fixed the language, but nobody needs to update their scripts.

Implementation note (which shouldn’t concern the user, but may be interesting to others): before interpretation, ngless will transform the input script, adding checks and optimizing it. A new pass (which is only enabled if the user requested version “0.0”) simply transforms the older code into its newer counterpart. Then, the rest of the process proceeds as if the user had typed in the newer version.
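
To make the idea concrete, here is a toy sketch in Python of such a version-gated pass (NGLess itself is implemented in Haskell and works on its AST rather than on raw text, so this is purely illustrative):

def upgrade_old_statement(line):
    # Purely illustrative: rewrite the old mutating-style `preprocess(x) ...`
    # into the new explicit-assignment form `x = preprocess(x) ...`
    if line.strip().startswith("preprocess("):
        arg = line.split("(", 1)[1].split(")", 1)[0]
        return "{} = {}".format(arg, line.strip())
    return line

def load_script(text):
    lines = text.splitlines()
    declared = lines[0].split(None, 1)[1].strip().strip('"')   # e.g. "0.0"
    if declared == "0.0":
        # compatibility pass: only runs for scripts declaring the old version
        lines = [upgrade_old_statement(line) for line in lines]
    # from here on, interpretation only ever sees new-style code
    return lines

print("\n".join(load_script('ngless "0.0"\npreprocess(input) using |read|:')))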

Classifying protists into 155 (hierarchically organized) classes

An important component of my recent paper (previous post) on imaging protist (micro-eukaryotes) communities is a classifier that classifies each individual object into one of 155 classes. These classes are organized hierarchically, so that the first level corresponds to living/non-living object; then, if living, classifies it into phyla, and so on. This is the graphical representation we have in the paper:

Using a large training set (>18,000 objects), we built a classifier capable of classifying objects into one of these 155 classes with >82% accuracy.

What is the ML architecture we use? In the end, we use the traditional system: we compute many features and use a random forest trained on the full 155 classes. Why a random forest?

A random forest should be the first thing you try on a supervised classification problem (and perhaps also the last, lest you overfit). I did spend a few weeks trying different variations on this idea and none of them beat this simplest possible system. Random forests are also very fast to train (especially if you have a machine with many cores, as each tree can be learned independently).

As usual, the features were where the real work went. A reviewer astutely asked whether we really needed so many features (we compute 480 of them). The answer is yes. Even when selecting just the best features (which we wouldn’t know a priori, but let’s assume we had an oracle), it seems that we really do need a lot of features:

(This is Figure 3 — supplement 4: https://elifesciences.org/articles/26066/figures#fig3s4sdata1)

We need at least 200 features and the curve never really saturates. Furthermore, features are computed in groups (Haralick features, Zernike features, …), so we would not gain much by computing only a subset of them anyway.

In terms of implementation, features were computed with mahotas (paper) and machine learning was done with scikit-learn (paper).
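
For the curious, the skeleton of such a pipeline looks roughly like the sketch below (heavily simplified; the features shown are only a small illustrative subset of the 480 we compute):

import numpy as np
import mahotas
from sklearn.ensemble import RandomForestClassifier

def compute_features(image):
    """A small subset of the kind of features used."""
    image = mahotas.stretch(image)                                 # scale to 0..255
    haralick = mahotas.features.haralick(image).ravel()            # texture
    zernike = mahotas.features.zernike_moments(image, radius=32)   # shape
    return np.concatenate([haralick, zernike])

def train_classifier(images, labels):
    """images: list of 2D arrays; labels: one of the 155 classes per image."""
    X = np.array([compute_features(im) for im in images])
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)  # trees trained in parallel
    clf.fit(X, labels)
    return clf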

§

What about Deep Learning? Could we have used CNNs? Maybe, maybe not. We have a fair amount of data (>18,000 labeled samples), but some of the classes are not as well represented (in the pie chart above, the width of the classes represents how many objects are in the training set). A priori, it’s not clear it would have helped much.

Also, we may already be at the edge of what’s possible. Accuracy above 80% is already similar to human performance (unlike some of the more traditional computer vision problems, where humans perform with almost no mistakes and computers had very high error rates prior to the neural network revolution).

New papers I: imaging environmental samples of micro eukaryotes

This week, I had two first author papers published:

  1. Quantitative 3D-imaging for cell biology and ecology of environmental microbial eukaryotes 
  2. Jug: Software for Parallel Reproducible Computation in Python

I intend to post on both of them over the next week or so, but I will start with the first one.

The basic idea is that just as metagenomics was the application of lab techniques (sequencing) that had been developed for pure cultures to environmental samples, we are moving from imaging cell cultures (the type of work I did during my PhD and shortly afterwards) to imaging environmental samples. These are, thus, mixed samples of microbes (micro-eukaryotes, not bacteria, but remember: protists are microbes too).


Figure 1 from the paper depicting the process (a) and the results (b & c).

The result is a phenotypic view of the whole community, not just the elements that you can easily grow in the lab. As it is not known a priori which organisms will be present, we use generic eukaryotic dyes, tagging DNA, membranes, and the exterior. In addition, chlorophyll is auto-fluorescent, so we get a free extra channel.

With automated microscopes and automated analysis, we obtained images of 300,000 organisms, which were classified into 155 classes. A simple machine-learning system can perform this classification with 82% accuracy, which is similar to (or better than) the inter-operator variability in similar problems.

The result is both a very large set of images as well as a large set of features, which can be exploited for understanding the microbial community.

No, research does not say that you produce more when working 40 hours per week

Last week, a debate flared up on twitter on working hours in academia and there was the claim that it is irrational to work over 40 hours as output actually goes down. I do not believe this claim.

A few starting notes:

  1. I am happy to be contradicted with data, but too often I see this issue being discussed with links to web articles citing other web articles, finally citing studies which suffer from the issues listed below.
  2. Maximum output at work is traded off against other valid personal goals. It is fine to argue that you prefer to produce less and spend more time with family or have more hobbies. Seriously, it’s a good argument. I just want people to make it instead of claiming a free lunch.
  3. I’m using mIF (milli Impact Factor points) as the unit of academic output below. This is a joke. If you want to talk about the impact factor, we can talk about it, but this is not what this post is about.
  4. I agree that presentialism (i.e., measuring or valuing how long people are present at a job) is an idiotic system (or cultural trait). This is an even worse system than measuring impact factor points. Again, this is not what this post is about.

I mostly think that every time a scientist says “Research shows…” and they’re wrong or using it to boost their political/personal beliefs, then anti-science activists deserve a point.

Measurement is hard

People lie about how much they work. They lie to conform to expectations and lies go in multiple directions. Thus, even though I do think that Americans (on average) work more than Europeans, I also think that Americans exaggerate how much they work and some workaholic Europeans exaggerate how much time they take off.

Cross-country studies will also often impute the legal work hours to workers in different countries even though these may not correspond to hours worked (officially, I work less now than during my PhD, but I actually work way more now).

Even well-meaning self-reports are terribly inaccurate. People count time spent at work even though they spend a lot of it on non-productive activities. It can even be hard to define the boundary between work and non-work. There is obvious work (me, writing a rebuttal letter to reviewers). There is obvious non-work (me, spending 30 minutes in the morning reading the newspaper online sitting at my work desk). But there is a vast grey zone: me, reading about Haskell bioinformatics libraries, or me writing a utility package in my free time that I end up using intensively at work. Often the obviously productive work ends up using ideas from the not-so-obviously productive bits.

This should lead us down the path of distrusting empirical studies. Not completely throwing them out the window, but being careful before claiming that “research shows …”.

It should also lead us to distrust the anecdotal reports of people who say they work 60 hours per week or those who have impressive CVs and claim to work only 35 hours and take long holidays.

What do you mean by productivity?

Often there is a game that is played in these discussions with the word productivity, as it is not always clear whether it refers to output per hour or output per week. For the moment, let’s be strict and use it in the output-per-hour sense.

Marginal productivity starts going down well before it turns negative. Thus, if you are optimizing for average productivity, you end up at a lower number of hours than if you are optimizing for total output. Here is what I mean (see an earlier post on the shape of this curve):

[Figure: schematic of weekly output as a function of hours worked]

Let’s say that academics produce impact factor points (the example carries over to most other knowledge work). Because there are fixed time costs in academia (as in almost all knowledge work), the first hours of the week produce 0 mIF. It will depend on the exact situation, but 10 hours a week can easily be spent on maintenance work (up to 20 or 30 if one is not careful). Then, the very productive hours produce 15 mIF/hour. As more hours are worked, one becomes tired, and the additional hours start producing less than 15 mIF (thus, marginal productivity is diminishing). Taken to the extreme, our academic becomes so tired that he cannot produce anything at all or even produces negative mIF (for example, by disrupting other people’s projects).

If you are hiring people by the hour, you want them to work to the point where output/hour is optimized, which is the traditional justification for why companies should have shorter work weeks. However, this can be well below the point at which output is maximal.
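
A toy numerical example (with made-up numbers, following the mIF story above) makes the distinction explicit:

# Marginal output (in mIF) of each successive hour in the week: the first 10
# hours go to fixed costs, then come productive hours, then increasingly tired ones.
marginal = [0]*10 + [15]*30 + [10]*10 + [5]*10 + [0]*10

def total(h):        # total weekly output after h hours
    return sum(marginal[:h])

def per_hour(h):     # average output per hour worked
    return total(h) / h

best_avg = max(range(1, 71), key=per_hour)
best_total = max(range(1, 71), key=total)
print(best_avg, best_total)   # 40 60: output/hour peaks at 40h, total output keeps rising to 60h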

Looking at some empirical work, it does seem that while output per hour peaks at just about 40 hours per week, the point of maximum total output is above 50 hours/week.

[Figure: empirical plot of output as a function of hours worked per week]

Thus, if you are managing a widget factory, you may not want your workers working more than 40-45 hours for your own selfish reasons. But this does not mean that this is the point of maximum output.

Anecdotally, it does seem that many people work 40 hours at their main jobs and still engage in either a second lower-paying job or in non-leisure cost-saving activities (with lower implied wages than their main job, although these are untaxed).

Averages hide variances

Again, work that is directed at managers of widget factories is not necessarily a guide to your behaviour. Perhaps some workers peak (in their average productivity) at 30 hours, others at 40, still others at 50. If you are managing them as a group, go for the average (look at the spread in the empirical plot above).

Maybe this is not where your maximum is. Maybe too, one can train to increase one’s maximum. Maybe your maximum this week is at 20 hours and the next week at 60.

Also, as I wrote above, many people take either a formal second job or undertake secondary cost-saving activities. Often these can be more flexibly scheduled than their main jobs. For example, someone who regularly does a longer trip to a cheaper grocery store to save a few bucks may skip that “second job” in the weeks where they are tired or have good leisure alternatives. Or they may only get around to fixing their own washing machine when they have a few hours without any better things to do.

As free-range knowledge workers, we get all of this flexibility already (remember the old joke that in academia you can work whichever 80 hours of the week you want). Perhaps this already alleviates many of the drawbacks of going above the widget-makers optimum. I certainly know that I enjoy the flexibility and that, while on average, I do work longer weeks, this is not true of every single week.

In a competition, payoffs can be heavily non-linear

It remains a great injustice that even though I can run 100m in just twice as much time as Usain Bolt, I cannot get even a tenth of his pay.

Sports are the extreme case as they are almost pure competition, but they do make the point clear: in competitive fields, just a bit more output can make a huge difference. In science, getting a project finished in 10 months instead of 11 months may be the difference between getting or not getting scooped. A paper that is just slightly better may get accepted while one that neglected that one extra experiment does not. A grant that scores two percentage points higher gets funding. And so on.

Unfortunately, in most cases, we cannot know what would have happened if we had just added that one extra experiment to the paper or submitted the grant without that bit of preliminary data we collected just before submission. But saying that we can never know is an epistemological argument; the reality still remains that a little extra effort can have a big payout.

Conclusion

I keep reading/hearing this claim that “research shows that you shouldn’t work as much” or that “research shows that 40 hours per week is the best”. It would be good if it were true: it would be a free lunch, but I just do not see that in the research. What I often see is a muddling of the term “productivity” that does not appreciate the difference between maximum average output per hour and maximum output per week.

I am happy to be corrected with the right citations, but do make sure that they address the points above.

 

Upcoming Travels

I have quite a bit of upcoming travel. If you, dear Reader, happen to be around any of these events (or just in the same cities), just get in touch and we might be able to get a coffee.

July 2017

  • I will be in Valencia for FEMS 2017, July 9-13
  • I will be in Lisbon on July 20 for LxMLS 2017
  • I will be in Prague on July 21-25 for ECCB/ISMB 2017. I have a poster about ngless at BOSC, but I will be around for the whole conference.

September 2017

November 2017

ANN: Diskhash. Disk-based, persistent hash tables

A few weeks ago, I decided to finally scratch an itch I’ve had for a while: I had a few days off from work and implemented a persistent, disk-based, hash table. Funnily enough, I’m now intensively using it at work, but a priori it felt more like a side project than a work one (it’s often a fuzzy border).

A disk based hashtable

The idea is very simple: it’s a basic hash table which is run on mmap()ed memory so that it can be loaded from disk with a single system call. I’ve heard this type of system referred to as “baked data”: you build structures in memory that can be written to and read from disk without any need for parsing/converting.
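
In Python terms, the “baked data” idea boils down to something like this toy example (not the diskhash format, just the concept):

import mmap
import struct

# "Bake" two fixed-width integers into a file with a fixed binary layout.
with open("baked.bin", "wb") as f:
    f.write(struct.pack("<qq", 42, 9))

# Loading is a single mmap() call; values are then read at fixed offsets,
# with no parsing or conversion step in between.
with open("baked.bin", "rb") as f:
    buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    print(struct.unpack_from("<qq", buf, 0))    # (42, 9)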

I implemented it all in C (because it is the lowest common denominator), but there are interfaces in C++, Python, and Haskell. The disk format is fixed, so all these interfaces can work with the same tables. You can jump to the bottom of the post to see code examples.

Performance

My usage is mostly to build the hashtable once and then reuse it many times. Several design choices reflect this bias and so does performance. Building the hash table can take a while. A big (roughly 1 billion entries) table took almost 1 hour to build. This compares to about 10 minutes for building a Python hashtable of the same size.

On disk, this table takes up 32GB (just the keys and data use up 21GB so I find the overhead acceptable). This compares with almost 200GB for the Python version. Additionally, several processes on the same machine can share the memory map (the operating system will do this automatically for you), further reducing memory usage when more than one process is running.

Using the C++ interface, I measured lookups as taking circa 10-20 microseconds each. When doing the same from Python, it takes 400-800 microseconds. The big spread depends on whether the cache is hot or cold (doing the same lookup twice is much faster than two different lookups, as the memory is already in cache). A raw Python hash table takes ca. 40 microseconds. My guess is that the extra overhead of diskhash in Python comes from boxing/unboxing the values, whereas the native Python hash table stores already-boxed objects (which is also what is responsible for its extra memory usage). Still, this is very acceptable.

Design

The format on disk is pretty simple:

[HEADER]
    - magic number (versioned)
    - options
    - size of table
    - number of used slots
[TABLE OF INDICES]
    - integer indices into data table [with value 0 representing NULL and other indices in 1-based format]
[DATA TABLE]
    - [key/value] pairs

The format on disk is the same as the format in memory, so loading is simply a call to mmap(). Collisions are handled with linear probing (the table load is kept below 50%). When it is necessary to expand the table, a completely new table is built (1.7x as large as the current one), all the elements are inserted into it, and then we switch to the new table. This can be quite expensive, but the cost is amortized, so insertions are still O(1), and it is possible to pre-allocate a large table if desired.

The indirection (there is a table of indices pointing to a data table) keeps disk space down at the cost of an extra step (and probably an extra memory access) at lookup time. The code is smart enough to switch from 32 to 64 bit indices as the table grows.
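
In toy Python form, insertion and lookup over the two tables work roughly like this (ignoring resizing, the on-disk layout, and the 32/64-bit index switch):

def insert(key, value, indices, data):
    """indices: list of slots, 0 = empty; data: list of (key, value) pairs."""
    slot = hash(key) % len(indices)
    while indices[slot] != 0:
        if data[indices[slot] - 1][0] == key:
            return False                        # repeated key: ignored
        slot = (slot + 1) % len(indices)        # linear probing
    data.append((key, value))
    indices[slot] = len(data)                   # 1-based index into the data table
    return True

def lookup(key, indices, data):
    slot = hash(key) % len(indices)
    while indices[slot] != 0:
        k, v = data[indices[slot] - 1]
        if k == key:
            return v
        slot = (slot + 1) % len(indices)
    return None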

There is currently no support for deleting keys.

Experience coding this

C is a pain, but compiling C is fast

I had actually not written any C code in many years. I often use C++, but raw C code is very different. Making sure that every cleanup path is correct leads to a lot of boilerplate and copy&pasting. Without exceptions and destructors, checking the return value of all functions that we call is a pain. It is not hard, but it sure is tedious.

One thing that was very cool is how fast compilation is. The first time I ran gcc, I thought there must have been something wrong as the command was instantaneous.

Nope, compilation of the library and the test driver takes <0.2s (slightly slower if you use optimizations; it goes all the way up to 0.3s).

This means that compiling and running C is about as fast as starting an interpreter.

Writing a disk based hash is easy, packaging the code is hard

The two hardest things in computer science are not naming things or cache invalidation but installing packages on Linux and solving packaging errors.

I first wrote a Python wrapper using ctypes, but while it was trivial to write and it worked well, I could not find a way to package it. Finally, I decided it was easier to just use the raw C API instead of figuring out how to convince setuptools to do what I wanted.

The Haskell packaging was slightly easier, but it still required a few tries until all the right files were correctly included in the package (which is why there were 3 releases until it worked: the code is the same, it was just me fiddling with packaging).

Examples

The following examples all create a hashtable to store longs (int64_t), then set the value associated with the key "key" to 9. In the current API, the maximum size of the keys needs to be pre-specified, which is the value 15 below.

Raw C

#include <stdio.h>
#include <inttypes.h>
#include <fcntl.h>    /* for O_RDWR|O_CREAT */
#include "diskhash.h"

int main(void) {
    HashTableOpts opts;
    opts.key_maxlen = 15;
    opts.object_datalen = sizeof(int64_t);
    char* err = NULL;
    HashTable* ht = dht_open("testing.dht", opts, O_RDWR|O_CREAT, &err);
    if (!ht) {
        if (!err) err = "Unknown error";
        fprintf(stderr, "Failed opening hash table: %s.\n", err);
        return 1;
    }
    long i = 9;
    dht_insert(ht, "key", &i);
    
    long* val = (long*) dht_lookup(ht, "key");
    printf("Looked up value: %l\n", *val);

    dht_free(ht);
    return 0;
}

Haskell

In Haskell, you have different types/functions for read-write and read-only hashtables.

Read write example:

import Data.DiskHash
import Data.Int
main = do
    ht <- htOpenRW "testing.dht" 15
    htInsertRW ht "key" (9 :: Int64)
    val <- htLookupRW "key" ht
    print val

Read only example (htLookupRO is pure in this case):

import Data.DiskHash
import Data.Int
main = do
    ht <- htOpenRO "testing.dht" 15
    let val :: Int64
        val = htLookupRO "key" ht
    print val

Python

Python’s interface is more limited and only integers are supported as values in the hash table (they are stored as 64-bit integers).

import diskhash
tb = diskhash.Str2int("testing.dht", 15)
tb.insert("key", 9)
print(tb.lookup("key"))

The Python interface is currently Python 3 only. Patches to extend it to 2.7 are welcome, but it’s not a priority.

C++

In C++, a simple wrapper is defined, which provides a modicum of type-safety. You use the DiskHash<T> template. Additionally, errors are reported through exceptions (both std::bad_alloc and std::runtime_error can be thrown) and not return codes.

#include <cstdint>    // for uint64_t
#include <iostream>
#include <string>

#include <diskhash.hpp>

int main() {
    const int key_maxlen = 15;
    dht::DiskHash<uint64_t> ht("testing.dht", key_maxlen, dht::DHOpenRW);
    std::string line;
    uint64_t ix = 0;
    while (std::getline(std::cin, line)) {
        if (line.length() > key_maxlen) {
            std::cerr << "Key too long: '" << line << "'. Aborting.\n";
            return 2;
        }
        const bool inserted = ht.insert(line.c_str(), ix);
        if (!inserted) {
            std::cerr  << "Found repeated key '" << line << "' (ignored).\n";
        }
        ++ix;
    }
    return 0;
}