I want you to think back for a moment to those science projects you did when you were a kid…you know, you had the question that you wanted to answer, the control and the experimental group, etc. etc. Do you happen to remember what the point of all of that was? The general idea is that there is only one difference between the control and the experimental group, so if you see a difference in outcome you can pretty safely conclude that it’s because of that one difference between the groups. For example, elementary-school econgirl did a project investigating what shape of paper airplane flies the furthest. It would have been scientifically invalid for me to make the airplanes out of different types of paper or to fly them in different places or have different people throw them or whatever, so I was very careful to keep everything constant except for airplane shape.
The scientific method is not relevant only to little kids and science fairs, however, and it is used extensively in the “real” sciences. (By “real” I mean physics, chemistry, etc.) Social sciences (read: economics), on the other hand, rely more on observational data and natural experiments. Given the above implication that controlled experiments are superior in terms of scientific discovery, why don’t economists, sociologists, etc. use them more often?
Probably because of problems like this. From Gawker:
In the name of getting good statistical data, New York City is randomly denying poor people access to a program designed to stave off homelessness. If those unlucky people become homeless, you know it works! Oh… is that frowned upon?
The program in question is called Homebase. It gives rental assistance, job training, and other benefits to people facing “immediate housing problems that could result in becoming homeless.” People in imminent danger of being on the street. The poorest of the poor, next to actual homeless people.
To find out whether or not Homebase really works, the city is conducting a study. As the NYT puts it, “Half of the test subjects – people who are behind on rent and in danger of being evicted – are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.”
(The original NYT article is here, but I think that the Gawker commentary gives some insight into popular opinion on the matter.)
Here’s the thing about the world: resources are limited (yes, even resources for warm fuzzy things like helping almost-homeless people), and society is better off if those resources are put toward things that actually work rather than wasted on things that don’t. I think that most people would agree with that, but then they get pretty squeamish when researchers actually try to figure out what works and what doesn’t. My suspicion is that not everyone remembers their elementary-school science projects and thus doesn’t understand why the researchers can’t just help everyone and see what happens. Luckily, those in charge of this experiment seem to be familiar with the concept of selection bias:
But Seth Diamond, commissioner of the Homeless Services Department, said that just because 90 percent of the families helped by Homebase stayed out of shelters did not mean it was Homebase that kept families in their homes. People who sought out Homebase might be resourceful to begin with, he said, and adept at patching together various means of housing help.
In other words, yes, 90 percent of people who got help stayed off of the streets, but it’s entirely possible that they would’ve figured something else out had they not gotten this particular form of assistance. Therefore, the researchers can’t conclude based on this evidence that the program actually had the effect that it was going for. (See here for more on correlation versus causation.)
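Diamond’s point can be made concrete with a toy simulation (all numbers here are invented for illustration, not taken from any Homebase data). Suppose “resourcefulness” makes people both more likely to seek out the program and more likely to stay housed on their own. Comparing people who sought help to people who didn’t then overstates the program’s effect, while random assignment recovers it:

```python
import random

random.seed(42)

N = 100_000
TRUE_EFFECT = 0.10  # assumed: the program truly raises the chance of staying housed by 10 points

def estimated_effect(randomized):
    """Difference in stayed-housed rates between treated and untreated groups."""
    treated_housed = treated_n = control_housed = control_n = 0
    for _ in range(N):
        resourceful = random.random() < 0.5
        if randomized:
            # random assignment: treatment is unrelated to resourcefulness
            treated = random.random() < 0.5
        else:
            # self-selection: resourceful people are far more likely to seek help
            treated = random.random() < (0.8 if resourceful else 0.2)
        # baseline chance of staying housed depends on resourcefulness...
        p_housed = 0.85 if resourceful else 0.55
        # ...plus the true program effect if treated
        if treated:
            p_housed += TRUE_EFFECT
        housed = random.random() < p_housed
        if treated:
            treated_n += 1
            treated_housed += housed
        else:
            control_n += 1
            control_housed += housed
    return treated_housed / treated_n - control_housed / control_n

print(f"naive (self-selected) estimate: {estimated_effect(randomized=False):.3f}")
print(f"randomized estimate:            {estimated_effect(randomized=True):.3f}")
```

With self-selection, the naive comparison makes the program look almost three times as effective as it really is (roughly a 28-point gap instead of the true 10), because the treated group was disproportionately resourceful to begin with. Randomizing who gets help breaks that link, which is exactly why the researchers can’t “just help everyone and see what happens.”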
The people involved with this project seem to be trying to do a good thing but are getting caught in a bit of an unfair PR nightmare. Granted, there would likely be less grumbling if this were a new program being offered only to some people, whereas the reality is that an existing program is being randomly withheld from some people. Feelings of entitlement aside, that distinction isn’t really important, but it’s not always easy to get people to see that. It’s also relevant to note that the program has a limited budget, and its spokespeople specifically point out that not everyone who applies gets assistance anyway. If those objecting realized that the same number of people were getting assistance, just allocated by lottery rather than by who applied first, would the objections be as loud? See, there are lots of things that I am curious about.
Most perplexing to me is the fact that no one seems to have their panties all in a twist about clinical drug trials, even though they employ exactly the same methodology as the study described above. I mean, how pissed off would you be if you were the cancer patient who got the placebo for the new treatment that turned out to work great for the people who actually got it? That said, I’m not holding my breath in hopes of seeing a New York Times feature on the issue.
In case you’re curious, economists are getting onto the “hey, let’s see if all this stuff we’re doing in developing countries actually works” boat and using data to design more effective and efficient humanitarian programs. For example, a classmate of mine is involved with TamTam, an organization that distributes malaria nets to needy women in Africa. Did you know that a $7 malaria net can reduce the risk of childhood mortality by 20 percent? Did you know that women can be encouraged to go to prenatal checkups if the distribution of the nets is paired with such visits? The people at TamTam know these things because they’ve gathered data in such a way that they can tease out cause and effect.
I don’t know about you, but I would rather support organizations that take specific care to make sure that the projects they undertake have the biggest possible impact. Unfortunately, it’s often impossible to tell what will have the biggest impact if we always give everyone what they think they want. Science is a bitch sometimes.