Saturday, March 17, 2007

Ahmad Dhani the researcher

Perhaps the biggest challenge for a researcher is to show causality – that X is causing Y. Most of the time, at best we can only claim that the two are correlated. Even if we use some fancy econometric techniques, we need to be careful when making claims about the coefficients. Usually, we phrase our conclusion as “higher/lower X is associated with higher/lower Y.” We tend to avoid saying things like “higher/lower X is causing higher/lower Y” because of reverse causality and omitted variable bias.

(Even if we only claim an association, we still need to be careful. Otherwise, we’d end up saying things like "people born under the astrological sign of Leo are 15% more likely to be admitted to hospital with gastric bleeding than those born under the other 11 signs," as quoted in this article.)

If" we want to claim causality, we need to think about counterfactual: “what would have happened to Y if X had not existed or happened.” Think about X is a policy or any kind of intervention, and Y is the outcome. Let’s say that X is classical music, and Y is baby’s IQ. Some people said that classical music increases (that is, causing in a positive way) baby’s intellectuality. Truly, I don’t know how true the research was; perhaps it was, perhaps it wasn't. The point is, all the researchers need is to provide a counterfactual: that babies who did not listen to classical music have lower IQ scores that those who have been exposed to classical music.

But simply comparing two babies (or two populations of babies) will not solve the problem. We may suspect that parents’ taste in music is correlated with wealth. Hence, babies who were exposed to classical music are more likely to come from wealthier parents who can feed them better nutrition or supply them with more IQ-stimulating games. This is what we call ‘omitted variable bias.’
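To make the bias concrete, here is a small simulation with purely made-up numbers: assume (hypothetically) that wealth raises both the chance of classical-music exposure and measured IQ, while the music itself has zero true effect. A naive comparison of listeners and non-listeners still shows a gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

wealth = rng.normal(size=n)                            # unobserved confounder
listens = (wealth + rng.normal(size=n)) > 0            # wealthier parents play more classical music
iq = 100 + 5 * wealth + rng.normal(scale=10, size=n)   # the music itself has zero true effect

naive_gap = iq[listens].mean() - iq[~listens].mean()
print(f"naive IQ gap (listeners minus non-listeners): {naive_gap:.2f}")
# prints a clearly positive gap even though the music does nothing
```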

A perfect study would be to make a clone of a newborn baby and treat the two exactly the same, except for the exposure to classical music, then measure their IQs after several years. Note that research involving human cloning will not pass an ethics committee, at least for now. The next best thing we can do is the so-called randomized experiment (as these guys, as well as this friend of ours, have been doing).
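Under the same hypothetical setup as above, randomization breaks the link between exposure and wealth: if exposure is assigned by coin flip rather than by parental taste, wealth is balanced across the two groups on average, and the simple comparison of means recovers the true effect (here, zero). A rough sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

wealth = rng.normal(size=n)
assigned = rng.random(n) < 0.5                         # exposure assigned by coin flip, independent of wealth
iq = 100 + 5 * wealth + rng.normal(scale=10, size=n)   # still no true effect of the music

gap = iq[assigned].mean() - iq[~assigned].mean()
print(f"IQ gap under random assignment: {gap:.2f}")
# close to zero: randomization balances wealth across the groups on average
```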

But Dhani (the musician, not our friend Dhani the real researcher) had a different idea. In his latest appearance on an infotainment show (indeed, watching infotainment is better than listening to the football commentators during the match break; by the way, Dhani appeared with his three boys but without Maia), he said that his boys must like rock music because it is ‘man’s music.’ His conclusion was based on a ‘study’ of some ‘sample’ – his so-called ‘sissy, queer’ (bences-bences) friends. His ‘sample,’ he claimed, do not like rock music. So look at the result. (The presenter asked Dhani’s kids whether they listen to Dave Koz; the answer was ‘no.’ Dhani then added, “Of course they don’t listen to him – they like Iron Maiden instead…”)

I enjoy Dhani’s music. I respect him as a musician. I have never liked his male-chauvinist antics and remarks. And he is definitely not a researcher.

7 comments:

  1. How can we apply randomization if the subject himself might behave randomly? Remember the discussion of preference and choice: people may change their behaviour.

  2. Suppose we’re doing it right (and there are a lot of tests to pass before we can call it ‘right’): we will be comparing the average outcomes of two groups, the one that received the intervention and the one that did not. True, people’s behavior is random, meaning that the outcome of the treatment (classical or rock music) for any individual will also be random. But if we have carefully defined the treatment and control groups, we would not expect the random process to systematically increase the average IQ of one group compared to the other.

    That said, a carefully run randomized experiment will isolate the ‘other factors,’ including the random process. If the observed difference in average IQ between the two groups is statistically different from zero, then we can conclude that the intervention caused an impact. Otherwise, the experiment helps us conclude that there is no evidence of causality. (See the sketch at the end of this thread.)

  3. well written, pe..

    However, as Arya mentioned, social (natural/randomized) experiments are very costly. They are usually used in program evaluation. So, many researchers rely mostly on the available data sets.

    I just wonder:

    1. It is true that reverse causality and other biases could lead to inconsistent estimates.

    However, can we use structural econometrics to eliminate the reverse-causality argument?

    I think structural econometrics has recently been developed in many fields of economics (empirical studies of search in labor markets and IO are among them). If this is true, then we only have to face omitted variable bias, selectivity bias, and the endogeneity issue. With these three problems, can we just do IV or FE (and thus get consistent estimates and interpret them as causal effects)?

    2. If the treatment is random (i.e., the treatment is independent of individual characteristics), can we just use the results of the untreated as a proxy for the counterfactual of the treated?

    I just wonder: if this is not true and the treatment is endogenous, should we then use a more complex method (maybe a matching model, such as propensity score matching)? Is that right? If so, again we need sufficient data, and thus money to collect the data.

    Thx in advance

  4. Ado -- yes, true randomized experiments are costly, difficult, and sometimes technically and politically impossible. (Imagine arbitrarily giving a vaccine to one household and denying it to the other.) Even if you can do that first step, there is no guarantee that there are no spillovers or other effects on the treatment and control groups.

    As you said, most of the time we can only rely on the available data sets. As an alternative, we can use approaches like 'natural randomization.' Or, again as you said, instrumental variables or fixed/random effects regressions.

    There are constraints with these methods too. I will discuss them in separate postings. Thanks!

  5. thanks for mentioning my name. i should start bookmarking any google/yahoo page that mentions my name... :))

    btw, it was nice seeing some of the cafesalembaian people today. we should hang out more...

    the researcher at CSIS sends his regards... oh, WHATEVER!!!!

  6. dhani: totally touché, maaaan ;-) hehehe

  7. He is such a lovely wanker! His comments show how ignorant he is. I'd like to have a discussion with him, or even bash his thick head with a load of books on sexuality, identities, and representation.

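For the curious, here is a minimal sketch of the comparison described in comment 2, again with purely made-up numbers: compare the average outcomes of the treatment and control groups and test whether the difference is statistically different from zero, here via a two-sample t-test on hypothetical simulated IQ scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# hypothetical IQ scores; the treatment group gets a made-up true effect of +2 points
treated = 100 + 2.0 + rng.normal(scale=10, size=500)
control = 100 + rng.normal(scale=10, size=500)

diff = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)  # Welch's two-sample t-test
print(f"difference in means = {diff:.2f}, p-value = {p_value:.4f}")
# a small p-value is evidence that the intervention caused an impact;
# a large p-value means the experiment found no evidence of causality
```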