The Hard Part is Motivation. Books. &c

Building Machine Learning Systems with Python

Because my book just came out, I am again excerpting from the Portuguese interview with me that I mentioned previously:

Who is the target audience for this book?

Luis Pedro: There are two distinct audiences: the first are programmers who do not know much about machine learning, but would like to use a classifier, for example. The second are people who do not need an introduction to machine learning (because they already know it), but may not know how to do it with the tools available in Python.

What were the main difficulties encountered in this process?

Luis Pedro: The challenge is always to find examples that are not too easy, but also not too difficult for an introduction. In one case (image classification), I took some photographs to create a dataset. In the literature, there are classical examples which are trivial to handle with modern techniques, and current research problems which are very difficult. My dataset consists of poorly framed photographs taken with a mobile phone camera, so it also has a style more akin to a problem from the mobile or web domain. The aim is to distinguish photographs of buildings, natural landscapes, and text.

There was nothing too complex for anyone who knows the area. In fact, when it comes to writing about something you already know how to do (as was the case), the hard part is the motivation.

[…]

One thing we stress throughout is principled evaluation, and we warn against overselling your results. Even in the world of research, we still find papers that mix the training and test sets! They obviously do not come from groups that work in machine learning, but in more applied areas we still find people who tune hyper-parameters on the test set.

Building Machine Learning Systems with Python

I wrote a book. Well, only in part. Willi Richert and I wrote a book.

It is called Building Machine Learning Systems With Python and is now available from Amazon (or Amazon.co.uk), although it has already been partially available directly from the publisher for a while (in a form where you get each chapter as its editing is finished).

§

The book is an introduction to using machine learning in Python.

We mostly rely on scikit-learn, which is the most complete package for machine learning in Python. I do prefer my own package, milk, for my own projects, but it is not as complete. It has some things that scikit-learn does not (and some that they have, correctly, appropriated).

We try to cover all the major modes in machine learning and, in particular, have:

  1. classification
  2. regression
  3. clustering
  4. dimensionality reduction
  5. topic modeling

and also, towards the end, three more applied chapters:

  1. classification of music
  2. pattern recognition in images
  3. using jug for parallel processing (including in the cloud).

§

The approach is tutorial-like, without much math but lots of code examples.

This should get people started and will be more than enough if the problem is easy (and there are still many easy problems out there). With good features (which are problem-specific, anyway), knowing how to run an SVM will very often be enough.
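To illustrate, here is a minimal sketch of what that looks like (my own toy example, not one from the book, and written against the current scikit-learn API rather than the one from 2013): an SVM trained on one of scikit-learn's bundled datasets, which stands in for problem-specific features, with a held-out test set so the reported number means something.

```python
# Minimal sketch (not from the book): train an SVM with scikit-learn.
# The bundled digits dataset stands in for problem-specific features.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Keep a held-out test set so the reported accuracy is meaningful
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC()                     # default RBF kernel
clf.fit(X_train, y_train)
print("held-out accuracy: %.3f" % clf.score(X_test, y_test))
```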

Lest you fear we are giving people just enough knowledge to be dangerous, we stress correct evaluation of the results throughout the book. We warn repeatedly against mixing up your training and testing data. This simple principle is, unfortunately, still often disregarded in scientific publications. [1]
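To make that concrete, here is a sketch of the kind of discipline we have in mind (again my own example, assuming a current scikit-learn): hyper-parameters are tuned by cross-validation on the training data only, and the test set is touched exactly once, at the very end.

```python
# Sketch of principled evaluation: tuning never sees the test set.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Cross-validated grid search uses only the training data
search = GridSearchCV(SVC(),
                      {'C': [0.1, 1.0, 10.0], 'gamma': [1e-3, 1e-4]},
                      cv=5)
search.fit(X_train, y_train)

# Report performance on data that played no part in the tuning
print("test accuracy: %.3f" % search.score(X_test, y_test))
```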

§

There is an aspect that I really enjoyed about this whole process:

Before starting the book, I had already submitted two papers, neither of which is out yet (even though, after some revisions, they have been accepted). In the meantime, the book has been written and edited (only a few minor issues are still pending), and people have been able to buy parts of it for a few months now.

I now have renewed confidence in the choice to stay in science (also because I moved from a place where things are completely absurd to a place where they work very well). But the publication delays that are common in the life sciences are an emotional drag. In some cases, the bulk of the work was finished years before the paper finally comes out.

Update (July 26 2013): Amazon is now shipping the book! I changed the wording above to reflect this.

[1] It is rare to see somebody just report training accuracy and claim their algorithm does well. In fact, I have never seen it in a recent paper. However, performing feature selection or parameter tuning on the whole data, prior to cross-validating on the selected features with the tuned parameters, is still pretty common today (there are other sins of evaluation too: “we used multiple parameters and report the best”). This leads to inflated results all around. One of the problems is that, if you do things correctly in this environment, you risk having reviewers of your work say “looks great, but so-and-so got better results” because so-and-so tuned on the testing set and seems to have “beaten” you. (Yes, I’ve had this happen, multiple times; but that is a rant for another day.)
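For the feature-selection case specifically, the fix is to make selection part of the model, so that it is re-fit inside each cross-validation fold rather than seeing the whole dataset up front. A sketch with scikit-learn's Pipeline (my illustration, not code from the book):

```python
# Sketch: feature selection inside a Pipeline, so each CV fold selects
# its own features and no information leaks from the held-out fold.
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

pipeline = Pipeline([
    ('select', SelectKBest(f_classif, k=20)),  # fit on training folds only
    ('svm', SVC()),
])

scores = cross_val_score(pipeline, X, y, cv=5)
print("cross-validated accuracy: %.3f" % scores.mean())
```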