No, research does not say that you produce more when working 40 hours per week

Last week, a debate flared up on Twitter about working hours in academia, including the claim that it is irrational to work more than 40 hours per week as output actually goes down. I do not believe this claim.

A few starting notes:

  1. I am happy to be contradicted with data, but too often I see this issue being discussed with links to web articles citing other web articles, finally citing studies which suffer from the issues listed below.
  2. Maximum output at work is traded off against other valid personal goals. It is fine to argue that you prefer to produce less and spend more time with family or have more hobbies. Seriously, it’s a good argument. I just want people to make it instead of claiming a free lunch.
  3. I’m using mIF (milli Impact Factor points) as the unit of academic output below. This is a joke. If you want to talk about the impact factor, we can talk about it, but this is not what this post is about.
  4. I agree that presentialism (i.e., measuring or valuing how long people are present at a job) is an idiotic system (or cultural trait). This is an even worse system than measuring impact factor points. Again, this is not what this post is about.

I mostly think that every time a scientist says “Research shows…” and is wrong, or is using it to boost their political/personal beliefs, anti-science activists deserve a point.

Measurement is hard

People lie about how much they work. They lie to conform to expectations and lies go in multiple directions. Thus, even though I do think that Americans (on average) work more than Europeans, I also think that Americans exaggerate how much they work and some workaholic Europeans exaggerate how much time they take off.

Cross-country studies will also often impute the legal work hours to workers in different countries even though these may not correspond to hours worked (officially, I work less now than during my PhD, but I actually work way more now).

Even well-meaning self-reports are terribly inaccurate. People count time spent at work even though much of it goes to non-productive activities. It can even be hard to define the boundary between work and non-work. There is obvious work (me, writing a rebuttal letter to reviewers). There is obvious non-work (me, spending 30 minutes in the morning reading the newspaper online at my work desk). But there is a vast grey zone: me, reading about Haskell bioinformatics libraries, or me, writing a utility package in my free time that I end up using intensively at work. Often the obviously productive work ends up using ideas from the not-so-obviously productive bits.

This should lead us down the path of distrusting empirical studies. Not completely throwing them out the window, but being careful before claiming that “research shows …”.

It should also lead us to distrust the anecdotal reports of people who say they work 60 hours per week or those who have impressive CVs and claim to work only 35 hours and take long holidays.

What do you mean by productivity?

Often a game is played in these discussions with the word productivity, as it is not always clear whether it refers to output per hour or output per week. For the moment, let’s be strict and use it in the output-per-hour sense.

Marginal productivity starts going down well before it turns negative. Thus, if you are optimizing for average productivity, you end up at a lower number of hours than if you are optimizing for total output. Here is what I mean (see an earlier post on the shape of this curve):

[Figure: marginal productivity and total output as a function of hours worked per week]

Let’s say that academics produce impact factor points (the example carries over to most other knowledge work). Because there are fixed time costs in academia (as in almost all knowledge work), the first hours of the week produce 0 mIF. It will depend on the exact situation, but 10 hours a week can easily be spent on maintenance work (up to 20 or 30 if one is not careful). Then, the very productive hours produce 15 mIF/hour. As more hours are worked, one becomes tired, and the additional hours produce less than 15 mIF (thus, marginal productivity is diminishing). Taken to the extreme, our academic becomes so tired that he cannot produce anything at all or even produces negative mIF (for example, by disrupting other people’s projects).

If you are hiring people by the hour, you want them to work to the point where output/hour is optimized, which is the traditional justification for why companies should have shorter work weeks. However, this can be well below the point at which output is maximal.
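To make this concrete, here is a toy numeric version of the curve (a Python sketch with invented numbers, continuing the mIF joke; it is not a fitted model). It just shows that the hours that maximize average productivity sit well below the hours that maximize total output:

# Toy model (all numbers invented): the first 10 hours/week are fixed
# costs and produce nothing; after that, marginal productivity starts at
# 15 mIF/hour and declines with fatigue, eventually turning negative.
def marginal_mif(hour):
    if hour <= 10:
        return 0.0                    # maintenance: email, admin, meetings
    return 15.0 - 0.35 * (hour - 10)  # diminishing, eventually negative

def total_output(hours):
    return sum(marginal_mif(h) for h in range(1, hours + 1))

best_avg = max(range(1, 81), key=lambda h: total_output(h) / h)
best_total = max(range(1, 81), key=total_output)
print(best_avg, best_total)           # prints: 31 52

With these made-up numbers, output per hour peaks at 31 hours/week while total output peaks at 52 hours/week; the exact values are meaningless, but the gap between the two optima is the point.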

Looking at some empirical work, it does seem that while average productivity peaks at just about 40 hours per week, the point of maximum output is above 50 hours/week.

[Figure: empirical data on output as a function of hours worked per week]

Thus, if you are managing a widget factory, you may not want your workers working more than 40-45 hours for your own selfish reasons. But this does not mean that this is the point of maximum output.

Anecdotally, it does seem that many people work 40 hours at their main jobs and still engage in either a second lower-paying job or in non-leisure cost-saving activities (with lower implied wages than their main job, although these are untaxed).

Averages hide variances

Again, work that is directed at managers of widget factories is not necessarily a guide to your behaviour. Perhaps some workers peak (in their average productivity) at 30 hours, others at 40, still others at 50. If you are managing a group, go for the average (look at the spread in the empirical plot above).

Maybe this is not where your maximum is. Maybe too, one can train to increase one’s maximum. Maybe your maximum this week is at 20 hours and the next week at 60.

Also, as I write above, many people either take a formal second job or undertake secondary cost-saving activities. Often these can be scheduled more flexibly than their main jobs. For example, someone who regularly does a longer trip to a cheaper grocery store to save a few bucks may skip that “second job” in the weeks when they are tired or have good leisure alternatives. Or they may only get around to fixing their own washing machine when they have a few hours without any better things to do.

As free-range knowledge workers, we get all of this flexibility already (remember the old joke that in academia you can work whichever 80 hours of the week you want). Perhaps this already alleviates many of the drawbacks of going above the widget-maker’s optimum. I certainly know that I enjoy the flexibility and that, while I do work longer weeks on average, this is not true of every single week.

In a competition, payoffs can be heavily non-linear

It remains a great injustice that even though I can run 100m in just twice as much time as Usain Bolt, I cannot get even a tenth of his pay.

Sports are the extreme case as they are almost pure competition, but they do make the point clear: in competitive fields, just a bit more output can make a huge difference. In science, getting a project finished in 10 months instead of 11 months may be the difference between getting or not getting scooped. A paper that is just slightly better may get accepted while one that neglected that one extra experiment does not. A grant that scores two percentage points higher gets funding. And so on.

Unfortunately, in most cases, we cannot know what would have happened if we had just added that one extra experiment to the paper or submitted the grant without that bit of preliminary data we collected just before submission. But saying that we can never know is an epistemological argument; the reality remains that a little extra effort can have a big payout.

Conclusion

I keep reading/hearing this claim that “research shows that you shouldn’t work as much” or that “research shows that 40 hours per week is the best”. It would be good if it were true: it would be a free lunch. But I just do not see that in the research. What I often see is a muddling of the term “productivity” which does not appreciate the difference between maximum average output/hour and maximum output/week.

I am happy to be corrected with the right citations, but do make sure that they address the points above.


Upcoming Travels

I have quite a bit of upcoming travel. If you, dear Reader, happen to be around any of these events (or just in the same cities), do get in touch and we might be able to get a coffee.

July 2017

  • I will be in Valencia for FEMS 2017, July 9-13
  • I will be in Lisbon on July 20 for LxMLS 2017
  • I will be in Prague on July 21-25 for ECCB/ISMB 2017. I have a poster about ngless at BOSC, but I will be around for the whole conference.

September 2017

November 2017

ANN: Diskhash. Disk-based, persistent hash tables

A few weeks ago, I decided to finally scratch an itch I’d had for a while: I had a few days off from work and implemented a persistent, disk-based hash table. Funnily enough, I’m now using it intensively at work, but a priori it felt more like a side project than a work one (it’s often a fuzzy border).

A disk-based hashtable

The idea is very simple: it’s a basic hash table which runs on mmap()ed memory so that it can be loaded from disk with a single system call. I’ve heard this type of system referred to as “baked data”: you build structures in memory that can be written to and read from disk without any need for parsing/converting.
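To make the “baked data” idea concrete, here is a small Python sketch (illustrative only: this is not the diskhash code, and the record layout is invented). Fixed-layout records are written once and later read directly from mmap()ed memory, with no parsing step:

import mmap
import struct

RECORD = struct.Struct('<16sq')  # fixed layout: 16-byte key + 64-bit value

# "Bake" two records into a file.
with open('baked.bin', 'wb') as f:
    f.write(RECORD.pack(b'key'.ljust(16, b'\0'), 9))
    f.write(RECORD.pack(b'other'.ljust(16, b'\0'), 42))

# Load with a single mmap() call and read fields in place: no deserialization.
with open('baked.bin', 'rb') as f:
    mem = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    key, value = RECORD.unpack_from(mem, 0)  # record 0, read in place
    print(key.rstrip(b'\0'), value)          # b'key' 9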

I implemented it all in C (because it is the lowest common denominator), but there are interfaces in C++, Python, and Haskell. The disk format is fixed, so all these interfaces can work with the same tables. You can jump to the bottom of the post to see code examples.

Performance

My usage is mostly to build the hashtable once and then reuse it many times. Several design choices reflect this bias and so does performance. Building the hash table can take a while. A big (roughly 1 billion entries) table took almost 1 hour to build. This compares to about 10 minutes for building a Python hashtable of the same size.

On disk, this table takes up 32GB (just the keys and data use up 21GB so I find the overhead acceptable). This compares with almost 200GB for the Python version. Additionally, several processes on the same machine can share the memory map (the operating system will do this automatically for you), further reducing memory usage when more than one process is running.

Using the C++ interface, I measured lookups at circa 10-20 microseconds each. Doing the same from Python takes 400-800 microseconds. The wide ranges depend on whether the cache is hot or cold (doing the same lookup twice is much faster than two different lookups, as the memory is already in cache). A native Python hash table takes ca. 40 microseconds. My guess is that the extra overhead of diskhash in Python comes from boxing/unboxing of types, whereas the native version keeps everything boxed (which is also responsible for its extra memory usage). Still, this is very acceptable.

Design

The format on disk is pretty simple:

[HEADER]
    - magic number (versioned)
    - options
    - size of table
    - number of used slots
[TABLE OF INDICES]
    - integer indices into data table [with value 0 representing NULL and other indices in 1-based format]
[DATA TABLE]
    - [key/value] pairs

The format on disk is the same as the format in memory, so loading is simply a matter of calling mmap(). Conflicts are handled with linear probing (table load is kept below 50%). When it is necessary to expand the table, a completely new table (1.7x as large as the current one) is built, all the elements are inserted into the new table, and then we switch to it. This can be quite expensive, but it is amortized, so insertions are still O(1), and it is possible to pre-allocate a large table if desired.

The indirection (there is a table of indices pointing to a data table) keeps disk space down at the cost of an extra step (and probably an extra memory access) at lookup time. The code is smart enough to switch from 32 to 64 bit indices as the table grows.

There is currently no support for deleting keys.
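To make the design concrete, here is a minimal in-memory Python sketch of the two-table scheme with linear probing (illustrative only: the names are invented and this is not the actual diskhash implementation):

# Sketch: a table of 1-based indices pointing into a flat data table,
# with linear probing on conflicts and load kept below 50%.
class SketchHash:
    def __init__(self, slots=16):
        self.indices = [0] * slots  # 0 means empty (NULL); entries are 1-based
        self.data = []              # flat table of (key, value) pairs

    def _slot(self, key):
        return hash(key) % len(self.indices)

    def insert(self, key, value):
        if (len(self.data) + 1) * 2 > len(self.indices):
            self._grow()            # keep table load below 50%
        slot = self._slot(key)
        while self.indices[slot] != 0:
            if self.data[self.indices[slot] - 1][0] == key:
                return False        # repeated key: ignored
            slot = (slot + 1) % len(self.indices)  # linear probing
        self.data.append((key, value))
        self.indices[slot] = len(self.data)        # 1-based index
        return True

    def lookup(self, key):
        slot = self._slot(key)
        while self.indices[slot] != 0:
            k, v = self.data[self.indices[slot] - 1]
            if k == key:
                return v
            slot = (slot + 1) % len(self.indices)
        return None

    def _grow(self):
        # Rebuild into a table 1.7x as large; expensive, but amortized O(1).
        old = self.data
        self.indices = [0] * int(len(self.indices) * 1.7 + 1)
        self.data = []
        for k, v in old:
            self.insert(k, v)

Growing rebuilds the whole structure, which is why pre-allocating a large table pays off for use cases like the billion-entry table above.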

Experience coding this

C is a pain, but compiling C is fast

I had actually not written any C code in many years. I often use C++, but raw C is very different. Making sure that every cleanup path is correct leads to a lot of boilerplate and copy&pasting. Without exceptions and destructors, checking the return value of every function you call is a pain. It is not hard, but it sure is tedious.

One thing that was very cool is how fast compilation is. The first time I ran gcc, I thought there must have been something wrong as the command was instantaneous.

Nope, compilation of the library and the test driver takes <0.2s (slightly slower if you use optimizations; it goes all the way up to 0.3s).

This means that compiling and running C is about as fast as starting an interpreter.

Writing a disk based hash is easy, packaging the code is hard

The two hardest things in computer science are not naming things or cache invalidation but installing packages on Linux and solving packaging errors.

I first wrote a Python wrapper using ctypes, but while it was trivial to write and it worked well, I could not find a way to package it. Finally, I decided it was easier to just use the raw C API instead of figuring out how to convince setuptools to do what I wanted.

The Haskell packaging was slightly easier, but it still required a few tries until all the right files were correctly included in the package (which is why there were 3 releases before it worked: the code is the same, it was just me fiddling with the packaging).

Examples

The following examples all create a hashtable storing longs (int64_t), then set the value associated with the key "key" to 9. In the current API, the maximum size of the keys needs to be pre-specified; that is the value 15 below.

Raw C

#include <stdio.h>
#include <inttypes.h>
#include <fcntl.h>    /* O_RDWR, O_CREAT */
#include "diskhash.h"

int main(void) {
    HashTableOpts opts;
    opts.key_maxlen = 15;
    opts.object_datalen = sizeof(int64_t);
    char* err = NULL;
    HashTable* ht = dht_open("testing.dht", opts, O_RDWR|O_CREAT, &err);
    if (!ht) {
        if (!err) err = "Unknown error";
        fprintf(stderr, "Failed opening hash table: %s.\n", err);
        return 1;
    }
    int64_t i = 9;
    dht_insert(ht, "key", &i);

    int64_t* val = (int64_t*) dht_lookup(ht, "key");
    printf("Looked up value: %" PRId64 "\n", *val);

    dht_free(ht);
    return 0;
}

Haskell

In Haskell, you have different types/functions for read-write and read-only hashtables.

Read write example:

import Data.DiskHash
import Data.Int
main = do
    ht <- htOpenRW "testing.dht" 15
    htInsertRW ht "key" (9 :: Int64)
    val <- htLookupRW "key" ht
    print val

Read only example (htLookupRO is pure in this case):

import Data.DiskHash
import Data.Int
main = do
    ht <- htOpenRO "testing.dht" 15
    let val :: Int64
        val = htLookupRO "key" ht
    print val

Python

Python’s interface is more limited and only integers are supported as values in the hash table (they are stored as 64-bit integers).

import diskhash
tb = diskhash.Str2int("testing.dht", 15)
tb.insert("key", 9)
print(tb.lookup("key"))

The Python interface is currently Python 3 only. Patches to extend it to 2.7 are welcome, but it’s not a priority.

C++

In C++, a simple wrapper is defined, which provides a modicum of type-safety: you use the DiskHash<T> template. Additionally, errors are reported through exceptions (both std::bad_alloc and std::runtime_error can be thrown) rather than return codes.

#include <cstdint>   // uint64_t
#include <iostream>
#include <string>

#include <diskhash.hpp>

int main() {
    const int key_maxlen = 15;
    dht::DiskHash<uint64_t> ht("testing.dht", key_maxlen, dht::DHOpenRW);
    std::string line;
    uint64_t ix = 0;
    while (std::getline(std::cin, line)) {
        if (line.length() > static_cast<std::size_t>(key_maxlen)) {
            std::cerr << "Key too long: '" << line << "'. Aborting.\n";
            return 2;
        }
        const bool inserted = ht.insert(line.c_str(), ix);
        if (!inserted) {
            std::cerr  << "Found repeated key '" << line << "' (ignored).\n";
        }
        ++ix;
    }
    return 0;
}

I tried Haskell for 5 years and here’s how it was

One blogpost style which I find almost completely useless is “I tried Programming Language X for 5 days and here’s how it was.” Most of the time, the first impression is superficial, discussing syntax and whether you could get Hello World to run.

This blogpost is “I tried Haskell for 5 years and here’s how it was.”

In the last few years, I have been (with others) developing ngless, a domain-specific language and interpreter for next-generation sequencing. For partly accidental reasons, the interpreter is written in Haskell. Even though I kept using other languages (mostly Python and C++), I have now used Haskell quite extensively for a serious, medium-sized project (11,270 lines of code). Here are some scattered notes on Haskell:

There is a learning curve

Haskell is a different type of language. It takes a while to fully get used to it if you’re coming from a more traditional background.

I have debugged code in Java, even though I never really learned (or wrote) any Java. Java is just a C++ pidgin language.

The same is not true of Haskell. If you have never looked at Haskell code, you may have difficulty following even simple functions.

Once you learn it, though, you get it.

Haskell has some very nice libraries

There really are very nice libraries, written by people doing really useful things.

Conduit and Parsec are the basis of a lot of ngless code.

Here is an excellent curated list of the Haskell library world (added May 4).

Haskell libraries are sometimes hard to figure out

I like to think that you need both hard documentation and soft documentation.

Hard documentation is where you describe every argument to a function and its effects. It is like a reference work (think of man pages). Soft documentation is tutorials, examples, and more descriptive text. Well-documented software and libraries will have both (there is no need for anything in between; I don’t want soft-serve documentation).

Haskell libraries often have extremely hard documentation: they will explain the details of every function, but offer little in the way of soft documentation. This makes it very hard to understand why a function could be useful in the first place and in which contexts a library should be used.

This is exacerbated by the often extremely abstract nature of some of the libraries. Case in point: the very useful MonadBaseControl class. Trust me, this is useful. However, because it is so generic, it is hard to immediately grasp what it does.

I do not wish to over-generalize. Conduit, mentioned above, has tutorials and blogposts as well as hard documentation.

Haskell sometimes feels like C++

Like C++, Haskell is (in part) a research project with a single initial Big Idea and a few smaller ones. In Haskell’s case, the Big Idea was purely functional lazy evaluation (or, if you want to be pedantic, call it “non-strict” instead of lazy). In C++’s case, the Big Idea was high level object orientation without loss of performance compared to C.

Both C++ and Haskell are happy to incorporate academic suggestions into real-world computer languages. This doesn’t need elaboration in the case of Haskell, but C++ has also been happy to be at the cutting edge. For example, 20 years ago, you could already use C++ templates to perform (limited) programming with dependent types. C++ really pioneered the mechanism of generics and templates.

Like C++, Haskell is a huge language, where there are many ways to do something. You have multiple ways to represent strings, you have accidents of history kept for backwards compatibility. If you read an article from 10 years ago about the best way to do something in the language, that article is probably outdated by two generations.

Like C++, Haskell’s error messages take a while to get used to.

Like C++, there is a tension in the community between the purists and the practitioners.

Performance is hard to figure out

Haskell and GHC generally let me get good performance, but it is not always trivial to figure out a priori which code will run faster and in less memory.

In some trivial sense, you always depend on the compiler to make your code faster (i.e., if the compiler were infinitely smart, any two programs that produce the same result would compile to the same highly efficient code).

In practice, of course, compilers are not infinitely smart, and so there is faster and slower code. Still, in many languages you can look at two pieces of code and reasonably guess which one will be faster, at least within an order of magnitude.

Not so with Haskell. Even very smart people struggle with very simple examples. This is because the most generic implementation of the code tends to be very inefficient. However, GHC can be very smart and make your software very fast. This works 90% of the time, but sometimes you write code that does not trigger all the right optimizations and your function suddenly becomes 1,000x slower. I have once or twice written two almost identical versions of a function with large differences in performance (orders of magnitude).

This leads to the funny situation that Haskell is (partially correctly) seen as an academic language used by purists obsessed with elegance; while in practice, a lot of effort goes into making the code as compiler-friendly as possible.

For the most part, though, this is not a big issue. Most of the code will run just fine and you optimize the inner loops at the end (just like in any other language), but it’s a pitfall to watch out for.

The easy is hard, the hard is easy

For minor tasks (converting between two file formats, for example), I will not use Haskell; I’ll do it in Python: it has a better REPL environment, there is no need to set up a cabal file, and it is easier to express simple loops, &c. The easy things are often a bit harder to do in Haskell.

However, in Haskell, it is trivial to add some multithreading capability to a piece of code with complete assurance of correctness. The old line that “if it compiles, it’s probably correct” is often true.

Stack changed the game

Before stack came on the scene, it was painful to make sure you had all the right libraries installed in a compatible way. Since stack was released, working in Haskell really has become much nicer. Tooling matters.

The really big missing piece is the equivalent of ccache for Haskell.

Summary

Haskell is a great programming language. It requires some effort at the beginning, but you get to learn a very different way of thinking about your problems. At the same time, the ecosystem matured significantly (hopefully signalling a trend) and the language can be great to work with.

When you say you are pro-science, what do you mean you are in favor of?

In the last few weeks, with the March for Science coming up, there have been a few discussions of what being pro-science implies. I just want to ask:

When you say you are pro-science, what do you mean you are in favor of?

Below, I present a few different answers.

Science as a set of empirically validated statements

Science can mean facts such as:

  • the world is steadily warming and will continue to do so as CO2 concentrations go up
  • nuclear power is safe
  • vaccines are safe
  • GMOs are safe

This is the idea behind the “there is no alternative to facts” rhetoric. The four statements above can be quibbled with (there are some risks to some vaccines, GMO refers to a technique rather than a product so even calling it safe or unsafe is not the right discussion, nuclear accidents have happened, and there is a lot of uncertainty about both the amount of warming and its downstream effects), but, when understood in general terms, they are facts, and those who deny them deny reality.

When people say that science is not political, they mean that these facts are independent of one’s values. I’d add the Theory of Evolution to the above four, but evolution (like Quantum Mechanics or Relativity) is even more undeniable.

Science and technology as a positive force

The above were “value-less facts”; let’s slowly get into values.

The facts above do not have any consequences for policy or behaviour on their own. They do constrain the set of possible outcomes, but for a decision, you need a set of values on top.

It’s still perfectly consistent with the facts to claim the following: vaccines are safe, but a person’s bodily autonomy cannot be breached in the name of utilitarianism; in the case of children, the parents’ autonomy should be paramount. This is a perfectly intellectually consistent libertarian position. As long as you are willing to accept that children will die as a consequence, I cannot really say you are denying the scientific evidence. This may seem a shocking trade-off when said out loud, but it also happens to be the de facto policy of the Western world for the past 10-20 years: vaccines are recommended, but most jurisdictions will not enforce them anymore.

Similar statements can be made about all of the above:

  • The world is getting warmer, but fossil fuels bring human beings wealth and so are worth the price to the natural environment. The rest should be dealt with through mitigation and geo-engineering. What is important is finding the lowest-cost solution for people.
  • Nuclear power is safe, but storing nuclear waste destroys pristine environments and that is a cost not worth paying.
  • GMOs are safe, but messing with Nature/God’s work is immoral.

Empirical facts can provide us with the set of alternatives that are possible, but do not help us weigh alternatives against each other (note how often cost/benefit shows up in the above, but the costs are not all material costs). Still, often being pro-science is understood as being pro technological progress and, thus, anti-GMO or anti-nuclear activism is anti-science.

Science as a community and set of practices

This meaning of “being pro-Science”, science as the community of scientists, is also what leads to views such as being pro-Science means being pro-inclusive Science. Or, on the other side, bringing up Dr. Mengele.

Although it is true that empirically validated facts are shared across humanity, there are areas of knowledge that impact certain people more than others. If there is no effort to uncover the mechanisms underlying a particular disease that affects people in poorer parts of the world, then the efforts of scientists will have a differential impact in the world.

Progress in war is fueled by science as much as progress in any other area and scientists have certainly played (and continue to play) important roles in figuring out ways of killing more people faster and cheaper.

The scientific enterprise is embedded in the societies around it and has, indeed, in the past resorted to using slaves or prisoners. Even in the modern enlightened world, the scientific community has had its share of unethical behaviours, in ways both big and small.

To drive home the point: does supporting science mean supporting animal experiments? Obviously, yes, if you mean supporting the scientific enterprise as it exists. And, obviously, no, if it means supporting empirically validated statements!

The cluster of values that scientists typically share

Scientists tend to share a particular set of values (at least openly). We are pro-progress (in the technological and social senses), socially liberal, cosmopolitan, and egalitarian. This is the view behind “science is international” and people sharing photos of their foreign colleagues on social media.

There is nothing empirically grounded about why these values would be better than others, except that they seem to be statistically more abundant in the minds of professional scientists. Some of this may really be explained by the fact that open-minded people will both like science and share this type of values, but a lot of it is more arbitrary. Some of it is selection: given that the career mandates travel and the English language, it holds little appeal for individuals who prefer a more rooted life. Some of it is socialization (spend enough time in a community where these values are shared and you’ll start to share them). Some of it is preference falsification (in response to PC, people are afraid to come out and say what they really believe).

In any case, we must recognize that there is no objective sense in which these values are better than the alternatives. Note that I do share them. If anything, their arbitrariness is often salient to me because I am even more cosmopolitan than the average scientist, so I see how the barrier between the “healthy nationalism” that is accepted and the toxic variety is a pretty arbitrary line in the sand.

What is funny, too, is that science is often funded for exactly the opposite reasons: it’s a prestige project for countries to show themselves superior to others, like funding the arts or the Olympics team. (This is not the only reason to fund science, but it is certainly one of them.) You also hear it in “Science is what made America great.”

Science as an interest group

Science can be an interest group like any other: we want more subsidies & lower taxes (although there is little room for improvement there: most R&D is already tax-exempt). We want to get rid of pesky regulation and to have the right to self-regulate (even though there is little evidence that self-regulation works). Science is an interest group.

Being pro-science

All these views of “What do I mean when I say I am pro-science?” interact and blend into each other: a lot of the resistance to things like GMOs does come from an empirically wrong view of the world, and correcting this view thus assuages concerns about GMOs. Similarly, if you accept that science generally results in good things, you will be more in favor of funding it.

Sometimes, though, they diverge. The libertarian view that mixes strong empiricism and a defense of empirically validated facts with an opposition to public funding of science is a minority view overall, but over-represented in certain intellectual circles.

On the other hand, I have met many people who support science as a force for progress and as an interest group, but who end up defending all sorts of pseudo-scientific nonsense and rejecting the consensus on the safety of nuclear power or GMOs. This is how I end up working at a major science institution whose health insurance covers homeopathy: the non-scientific staff will say they are pro-science, but will cherish their homeopathic “remedies”. I also suspect that many people declare themselves pro-science because they see it as their side versus the religious views they disagree with, even though you can perfectly well be religious and pro-science in the sense of accepting the scientific facts. I would never claim that Amish people are pro-progress, and I hazard no guess on their views on public science funding, but many are happy to grow GMOs as they accept the empirical fact of their safety. In that sense, they are more pro-science than your typical Brooklyn hipster.

Sometimes, these meanings of being pro-science blend into each other by motivated reasoning. So, instead of saying that vaccines are so overwhelmingly safe and that herd immunity is so important that I support mandating them (my view), I can be tempted to say “there is zero risk from vaccines” (which is not true for every vaccine, but I sure wish it were). I can be tempted to downplay the uncertainty in the harder-to-disentangle areas of economic policy, cite the empirical studies that agree with my ideology, and call those who disagree “anti-scientific.” I might deny that values even come into play at all. We like to pretend there are no trade-offs. This is why anti-GMO groups often start by discussing intellectual property and land-use issues and end up trying to shut down high-school biology classes.

In an ideal world, we’d reserve the opprobrium of “being anti-science” for those who deny empirical facts and well-validated theories, while discussing all the other issues as part of the traditional political debates (is it worth investing public money in science, or should we invest more in education and new public housing? or lowering taxes?). In the real world, we often borrow credibility from empiricism to support other values. The risk, however, is that what we borrow, we often have to pay back with interest.

What surprised me in 2016

2016 made me reassess an important component of my view of the world. No, not Brexit or Trump becoming President (although it’s not unrelated).

At the end of 2016, I realized that almost all psychology is pseudo-science. Not hyperbole, not oversold, but pseudo-science.

People used to joke that parapsychology is the control group for science: i.e., a bunch of people ostensibly following the scientific method in a situation where every result should come out negative. It’s a null field: the null hypothesis (that there is no effect) is true. Thus, the fact that you can still get positive effects should be worrisome. It turns out the real joke was that psychology is the true control group. Parapsychology was a bad control, as most scientists were already predisposed to disbelieve it. Psychology is a much better control.

I had heard of the “Replication Crisis” before, but had not delved into the details. I thought psychology was like microbiome studies: over-hyped but, fundamentally, correct. We may see reports that the microbiome makes you rude to your Uber driver or whatever silly effect. We often read about the effects of the microbiome on obesity, as if it didn’t matter that our diets are not as healthy as they should be and it was all down to microbes. Jonathan Eisen collects these as overselling the microbiome.

Still, to say that people oversell the microbiome is not to say that there is no effect. Microbes do not single-handedly cause obesity, but they have an impact on the margin (a few BMI points up or down), which is enough to be significant for the population. They may neither cause nor cure cancer, but they seem to influence the effect of immunotherapy enough that we may need to adjust dosages/drug combinations. And so on…

I thought that when it came to psychology, the same was true: sure, a lot of hype, but I thought there was a there there. There isn’t.

My basic mistake was that I had shared Daniel Kahneman’s view of the situation:

My position […] was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change.

This was exactly my position until I read this long Andrew Gelman post. Since then, I have read up on the subject and found that psychology (as a field) has sold us a bill of goods.

(Computer-programming) language wars are a bit silly, but not irrational

I don’t know where I first heard it (and it was probably not first hand), but there is an observation about how weird it is that, in the 21st century, computer professionals segregate by the language they use to talk to the machine. It just seems silly, doesn’t it?

Programming language discussions (R vs Python for data science, C++ or Python for computer vision, Java or C# or Ruby for webapps, …) are a staple of geekdom and easy to categorize as silly. In this short post, I’ll argue that, while silly, they are not completely irrational.

Programming languages are mostly about tooling

Some languages are better than others, but most of what matters is not whether the language itself is any good, but how large the ecosystem around it is. You can have a perfect language, but if there is no support for it in your favorite editor/IDE and no good HTTPS libraries that can handle HTTP/2, then working in it will be less efficient and even less pleasant than working in Java. On the other hand, PHP is a terrible, terrible language, but its ecosystem is (for its limited domain) very nice. R is a slightly less terrible version of this: not a great language, but a lot of nice libraries and a good culture of documentation.

Haskell is a pretty nice programming language, but working in it got much nicer once stack appeared on the scene. The language is the same, even the set of libraries is the same, but having a better way to install packages is enough to fundamentally change your experience.

On the other hand, Haskell is (still?) enough of a niche language that nobody has yet written a tool comparable to ccache for the C/C++ world (instantaneous rebuilds are amazing for a compiled language).

The value of your code increases if you program in a popular language

This is not strictly true: if the work is self-contained, then it may be very useful on its own even if you wrote it in COBOL, but often the more people who can build upon your work, the more valuable that work is. So if your work is written in C or Python as opposed to Haskell or Ada, everything else being equal, it will be more valuable (not everything else is equal, though).

This is somewhat field-dependent. Knowing R is great if you’re a bioinformatician, but almost useless if you’re writing webserver code. Even general-purpose languages find niches based on history and tools. Functional programming languages somehow seem to be more popular in the financial sector than in other fields (R has a lot of functional elements, but is not typically thought of as a functional language; probably because functional languages are “advanced” and R is “for beginners”).

Still, a language that is popular in its field will make your own code more valuable. Packages upon which you depend will be more likely to be maintained, tools will improve. If you release a package yourself, it will be more used (and, if you are in science, maybe even cited).

Changing languages is easy, but costly

Any decent programmer can “pick up” a new language in a few days. I can probably debug code in any procedural language without ever having seen it before. However, to really become proficient often takes much longer: you need to encounter and internalize the most natural way to do things in the new language, the quirks of the interpreter/compiler, learn about different libraries and tools, &c. None of this is “hard”, but it all takes a long time.

Programming languages have network effects

This is all a different way of saying that programming languages have network effects. Thus, if I use language X, it is generally better for me if others also use it. Not always explicitly, but I think this is the rationale behind many of these programming language discussions.