Thursday Links

Lazy August links:

  1. Ray, the parking robot:

    [T]he system could work fine without any human oversight, but the airport is having an employee on hand in case travelers have questions about how to use the new option.

  2. OKCupid experiments on human beings! But because they don’t publish in scientific journals and do it just for fun and profit, that’s totally OK (unlike the Facebook study, which was unethical because they published).

Friday Links

  1. A last link on the Facebook saga makes the observation that if Facebook just includes some randomness in all decisions of what to show in the stream (which they might do simply to improve their service), then any study which leverages that randomness is a randomized observational study. And observational studies face much lower ethical hurdles.

  2. An excellent summary of what is known about SSRIs. I really liked both the conversion of effect size to weight loss numbers and the discussion of how, whilst people can agree on the data, it gets very hairy when you start to use words such as “moderate depression” or “severe depression” to describe different numeric results.

  3. Some people who make more money than I do can activate your DNA. Obviously, they are skeptical of ENCODE claims. This guy’s biography is perfect:

    Toby Alexander, is a coach, speaker, seminar leader and author. He is a leading expert in a variety of fields including energy medicine, emotional mastery, peak mental strategies for optimal performance, 15th dimensional physics, futures and forex trading, SAP, remote viewing, and distant healing.

  4. Next week, I’ll be in Lisbon for LxMLS 2014. Contact me if you want to get in touch there.

Two more links about the Facebook study

From Wired, “Everything You Need to Know About Facebook’s Controversial Emotion Experiment”, making the argument that an IRB reviewing the study would likely have approved it:

The [hypothetical] IRB might plausibly have decided that since the subjects’ environments, like those of all Facebook users, are constantly being manipulated by Facebook, the study’s risks were no greater than what the subjects experience in daily life as regular Facebook users, and so the study posed no more than “minimal risk” to them.

Also, see here:

[C]ritics of the Facebook experiment should at least be aware that we are talking about a mode of research that existed long before Facebook, and that federal ethics advisors and regulators specifically decided that it should proceed.

Facebook was probably ethically wrong, but morally OK in studying user emotions

Facebook ran a study of how its users react to different sorts of stories in their feeds: it gave different users different mixes of posts and measured the emotional words in what those users subsequently wrote. It turns out there is a tiny but measurable effect on what people write afterwards. Several people were immediately outraged that Facebook would do such a thing and publish it.
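
To make the measurement side concrete, here is a toy Python sketch of emotion-word scoring; the word lists are invented for illustration (the actual study used the LIWC lexicon to classify posts):

    # Toy emotion-word scoring, not Facebook's actual code: score a post by
    # the fraction of its words that appear in small "positive"/"negative"
    # word lists (invented here; the study itself used the LIWC lexicon).
    POSITIVE = {"happy", "love", "great", "good", "fun"}
    NEGATIVE = {"sad", "angry", "hate", "bad", "hurt"}

    def emotion_rates(post):
        """Return the (positive, negative) fraction of words in a post."""
        words = [w.strip(".,!?").lower() for w in post.split()]
        if not words:
            return 0.0, 0.0
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return pos / len(words), neg / len(words)

    print(emotion_rates("I love this, it was great fun!"))  # (0.43, 0.0)

The experiment then compared these rates between users whose feeds had some positive posts filtered out and users whose feeds had some negative posts filtered out.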

§

Much of the discussion among scientists centered on the fact that Facebook did not get Institutional Review Board (IRB) approval; since they probably received US public money, the law would have required them to go through an IRB process. And indeed, if you are working with human subjects, you need approval from one of these IRBs to conduct your research.

This whole discussion reads to me as incredibly legalistic (all the more so because Cornell’s IRB might have given them a yes; it is just not clear whether all the protocols were followed correctly). At one extreme, it even felt a bit like “we scientists in academia have to jump through all sorts of bureaucratic hoops, why shouldn’t others do the same? Not fair!”

§

Even if Facebook is at fault for not following the rules, that is a different question from whether what they did was wrong. Sure, if they did not follow some regulation tied to their federal research funding, maybe the funding agency should cut that funding or warn them to improve their practices or whatnot. But was it morally wrong? Here, I just don’t see a strong case for the prosecution.

Reasoning by etymology is fallacious, but at times like these the relationship between ethics and etiquette just jumps out at me. When I tell my daughter that she needs to ask politely, she excitedly asks in a nice voice with a please at the end. She knows that by then I have already given in; the rest is just procedural: follow the form, the etiquette, the ethics protocol. Is the problem that Facebook did not say please?

§

This sort of study is standard practice for private companies, except that it is normally done to increase profits, not knowledge. Every company tries out new things on its users, from the corner coffeeshop owner who asks me what I think of the new Ethiopian brew, to the large-scale A/B testing done by internet companies. In an A/B test, an organization randomly serves one of two versions of its website to each user and sees which one works best on whatever metric it cares about (clicks, purchases, or donations being common goals).
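
As a concrete sketch of the mechanics (all names and rates below are hypothetical, and real systems add logging and proper significance testing): each user is deterministically hashed into a bucket, and the metric is then compared across buckets.

    # Minimal A/B-testing sketch in Python; every name and rate is made up.
    import hashlib
    import random

    def assign_variant(user_id):
        """Deterministically bucket a user into 'A' or 'B' by hashing their ID."""
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # Simulate outcomes, pretending variant B truly converts slightly better.
    true_rate = {"A": 0.10, "B": 0.11}
    views = {"A": 0, "B": 0}
    clicks = {"A": 0, "B": 0}
    random.seed(0)
    for i in range(100000):
        variant = assign_variant("user-%d" % i)
        views[variant] += 1
        clicks[variant] += random.random() < true_rate[variant]

    for variant in ("A", "B"):
        print(variant, clicks[variant] / views[variant])  # conversion per bucket

Hashing the user ID, rather than flipping a coin on every page view, keeps each user’s experience consistent across visits.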

For a company the size of Facebook, several of these experiments will be running at any given time. Do people share more if photos of friends are shown above or below the text? Will this cause them to share more photos themselves? To “Like” them more?

Now, this is rarely phrased as “manipulating users’ emotions”, but, really, what else is it? This whole brouhaha started with another set of commercial entities manipulating their users’ emotions to sell more advertising, namely the press writing inflammatory stories about the Facebook study [1].

§

If a company does this all the time to increase profits, what harm is done to the human subjects when it does the same thing and publishes the result?

Hilary Mason wrote that cultures are not consistent, which is a fine conservative sentiment, but it is not enough to just say “this is how it is, take it or leave it.” The inconsistencies should at least give us some pause and make us question our emotional certainties.

§

Following a sort of Godwin’s Law for ethics, the Tuskegee Syphilis Experiment was immediately brought up by several people (this is the infamous study in which Black syphilis patients were left untreated “to see what happened”). I don’t see, however, how it remotely applies. Even setting aside for a minute that untreated syphilis is much worse than a small (but measurable) change in the use of emotionally laden words, the fundamental difference I see is this: withholding treatment from syphilis patients is bad (illegal, even) outside the context of a scientific study, and it is not enough to say “it’s OK, because it’s for science”. To put it another way, individual rights cannot be trampled just for scientific benefit. What Facebook did, however, is perfectly fine except when it is done for science. That is fundamentally different from the problem of misusing individuals for the greater good.

§

I just cannot shake the idea that Facebook was fine until they published their results through the traditional scientific process. That was their mistake.

Facebook has probably learned its lesson and will no longer attempt to publish any of its studies. It will still run them internally to understand its business better and make more money; it just won’t publish them. This knowledge will now spread through word of mouth and at tech conferences without ever making it into the scientific literature [2].

This is a loss.

[1] The paper had actually been out for a month. PNAS-reading scientists did not seem to care too much until they were riled up by the press and social media.
[2] Also, it won’t be peer reviewed, but, hey, it’s psychology: their publication standards are way lower than whatever rule Facebook uses to decide to change the font on its website (because Facebook’s website font matters more than academic psychology).