I got an email from Steve Landsburg with the subject line "krugman, me and you." I can't decide whether that counts as the sort of threesome I've always dreamt about...
I get daily emails from The Chronicle of Higher Education newsletter. Today's headline: "Academe Today: Professor Says His University Cares Little About Teaching." I had to stop for a second and confirm that I wasn't in fact reading The Onion.
I’ve written about this supposed dystopian nightmare before- the real problem isn’t that we can create a lot of stuff without taking up a lot of people’s time (that in itself sounds lovely, right?), the problem is that we as a society don’t know how to efficiently allocate resources without the price system, and, more importantly, we can’t seem to conceive of a mechanism for getting the inputs to that price system (i.e. dollars) to people that doesn’t involve trading said dollars for worker effort (i.e. labor). I’m not entirely convinced that we’re getting to the dire point of labor obsolescence any time soon (see here for some evidence of what ATMs have done to bank tellers, for instance), but I do think that we need to start getting potential solutions out into the public consciousness well ahead of time, since social norms and attitudes don’t change overnight (looking at you, gay marriage).
It’s worth noting that my statements aren’t a criticism of Reich himself except in that he appears to have taken up the journalistic hobby of burying the lede by putting this part at the end:
Our underlying problem won’t be the number of jobs. It will be – it already is — the allocation of income and wealth.
What to do?
“Redistribution” has become a bad word.
But the economy toward which we’re hurtling — in which more and more is generated by fewer and fewer people who reap almost all the rewards, leaving the rest of us without enough purchasing power – can’t function.
It may be that a redistribution of income and wealth from the rich owners of breakthrough technologies to the rest of us becomes the only means of making the future economy work.
I get the sense that people like Reich believe that the public will pay more attention if they are presented with the dystopian nightmare view of the situation- they may be right, but I’m curious as to how a “the world will be so awesome except for this distribution problem that we need to figure out” marketing angle would play instead. In any case, we do need to start thinking about a mindset change, since otherwise we run the risk of being that society that has people do busy work so that we have an excuse to give them money. We all know how much we “love” that at our jobs nowadays, so how about we start considering some alternatives, hm?
P.S. I definitely feel that some Player Piano fan fiction is in order here. Let’s get on that, shall we?
I can’t decide whether this is more or less ridiculous than negative interest rates for dealing with a liquidity trap…
(One nitpick: Technically, savings is what funds investment, so it doesn’t generally make sense to think that they move in opposite directions as suggested above. That said, if you define “saving” as putting money in banks or government securities and “investment” as private-sector investment, then the statement is pretty reasonable.) On a practical level, however, this can only work if people don’t recognize Janet Yellen, since they need to get scared and run away rather than be all “OMG it’s Janet Yellen, you’re so cool, can I take a selfie with you?” Fortunately (in this context and probably no other), Yellen’s celebrity doesn’t appear to be a limiting factor:
The survey showed 70% of those polled don’t know or aren’t sure who Ms. Yellen is. In contrast, just 1% had never heard of former president George W. Bush.
When it comes to the Federal Reserve as an institution, 42% said they had a neutral view, while 30% had a somewhat or very positive view and 20% had a somewhat or very negative view. Just 8% didn’t know the name or weren’t sure what it was.
It is useful for policy-making purposes to adjust monthly data to an expected annual rate. But the current method needs to be updated and based on something other than the weather.
Somewhere in this process I managed to get into a Twitter spat with Phil Izzo, who defended his employer (the WSJ) by pointing out that the article was written by a Harvard Business School professor and not one of the journal’s journalists. (For the record, I generally think that Phil is a good dude.) What started all of this was a comment that I made about how journalists should have subject matter expertise, so I responded by pointing out that editors are journalists too. I don’t care if something is in the “opinion” section or not, it’s gotta be vetted for the crazy sauce. In this case, even the author’s job title wasn’t vetted, since he’s been retired for a while as far as I can tell. (Emeritus, people, come on.) Otherwise, people get misled by the parts of “opinion” pieces that aren’t actually opinions, and FiveThirtyEight has to spend its time trying to combat the absurdity:
This is so wrong that it’s hard to know where to begin. First, seasonal adjustment isn’t entirely, or even primarily, about the weather. It’s about accounting for recurring patterns, whatever they may be. Tax preparation firms hire lots of people every spring and then lay them off after April 15. Landscaping firms employ far more people in the summer than in the winter. Automakers shut down their factories each summer to change over to the new model year.
But Mills doesn’t make that argument. Instead, he writes: “The [Bureau of Labor Statistics] should report both seasonally adjusted and actual figures each month.” But of course, the BLS already does this — which Mills knows, because that’s where he gets his “2.7 million jobs” figure from the first paragraph of his story.
I figured I should do my part too, so I answered in the form of a video about the jobs report and a very simple seasonal adjustment example, and some fun new visual tricks:
I think I kind of like looking like I’m trapped in numbers jail.
Pop quiz: What do Larry Summers and John Oliver have in common? Are they both British? No…Are they both funny? Not intentionally…Do they both make awkward statements about women? Actually, I’m not sure, but I hope not…Are they both actively advocating for infrastructure spending? Ding ding ding!
Larry Summers recently said something startling: “At this moment . . . the share of public investment in GDP, adjusting for depreciation, so that’s net share, is zero. Zero. We’re not net investing at all, nor is Western Europe,” he told a Princeton University audience.
In other words, total federal, state, and local government investment is enough to cover only the amount of wear and tear on bridges, roads, airports, rails, and pipes. “Can that possibly make sense?” asked the former Treasury secretary, who has been campaigning for more government spending on infrastructure.
Well, technically it could make sense- if spending were such that everything was being maintained properly but new infrastructure wasn’t being built, then at least it would be true that things aren’t getting worse. What we see instead, however, is that there are structures actively in disrepair and without the funding to return them to their former (lack of) glory. Summers acknowledges that increased debt burdens are bad for future generations, but so is crappy infrastructure, and he asserts that low-interest-rate environments (like the one we’re in now) are a relatively good time to borrow to undertake such projects.
Overall, I agree- as long as we’re not talking about “that weird old bridge that no one uses anyway” or something like that, then infrastructure at the very least needs to be maintained properly. In terms of leaving burdens for future generations, maintaining now is particularly important in cases where letting structures depreciate makes them disproportionately more expensive to fix later. (You know, in the same way that paying for healthy food now is cheaper than paying for diabetes later.) That said, I’m a little frustrated that people just now seem to be getting on the infrastructure bandwagon, since you know when’s an even better time to undertake infrastructure projects? When you have a bunch of unemployed people sitting on the couch- it’s more efficient to pull people off the couch to repair bridges than it is to pull them from other productive work. (This isn’t even a Keynesian stimulus argument, just an opportunity cost one.)
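The “borrow while rates are low” point is really just present-value arithmetic: a deferred repair bill looks smaller in today’s dollars when interest rates are high and bigger when they’re low. A quick sketch with entirely invented numbers (the $10M/$25M figures and the rates are mine, purely for illustration):

```python
# Toy present-value comparison (all numbers invented): maintain a bridge
# now for $10M, or defer maintenance and face a $25M rebuild in 10 years.

def present_value(cost, rate, years):
    """Discount a future cost back into today's dollars."""
    return cost / (1 + rate) ** years

maintain_now = 10.0    # $M, paid today
rebuild_later = 25.0   # $M, paid in 10 years if we defer

for rate in (0.01, 0.05, 0.10):
    pv_defer = present_value(rebuild_later, rate, 10)
    cheaper = "maintain now" if maintain_now < pv_defer else "defer"
    print(f"rate {rate:.0%}: deferring costs ${pv_defer:.1f}M in today's dollars -> {cheaper}")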
I get it, infrastructure is easy to take for granted- it’s just always there and passive, like that old boyfr…er, pair of slippers. We don’t pay for it directly or use it exclusively, so thinking about its upkeep is far more abstract than contemplating condo renovations or whatnot. It’s important to keep in mind, however, that we have the luxury of taking infrastructure for granted only because past generations made it a priority to put it there, so it’snot really feasible to take it for granted forever. And believe me, you notice when it’s not there, whether it be potholes in roads, construction projects that get in the way and take way too much time, poorly functioning public transport (can you tell I live in Boston?)…or, you know, this:
If that didn’t get your attention, I don’t know what will…except perhaps this, featuring Ed Norton:
I would like to think that we’re not children and therefore don’t need things to be sexy in order to pay attention, but one glance at my site’s name would suggest that I’m more cynical than that.
In related news, I got a care package from John Oliver’s show that included a magic 8-ball that knows more about me than I think I am comfortable with:
There was also a USB drive include in the package, and of course the drive had a video on it…in the video, Oliver puts Meryl Streep and Thomas Piketty in the outer “neither” space of the “has an active social media presence” and “watches Last Week Tonight”- while the empiricalevidence regarding Piketty is in Oliver’s favor on the social media front, I really want to think that Piketty sits on the couch with his…well, French Cheetos (Les Cheez Doodles?) and Ben & Jerry’s (Francois & Pierre’s?) and watches all of the parody news shows to see how often they mention him. (for the record, that is not a judgment on you, Tommy boy, but I will judge your book’s social media team for giving up after four tweets…FOUR. I think my cat has tweeted more than that by walking on the keyboard.)
CHAIRMAN BERNANKE. …Let me turn now to the economic situation. Boy, I think it has been a while since we were three and a half hours into the meeting before we got to the staff forecast.
MR. STOCKTON. The GDP is a little smaller than it was at the start of the meeting.
Or perhaps you prefer this one…
MS. YELLEN. …The residential housing sector has now shrunk so much that the only real assurance that it will ever stabilize seems to be the fact that construction spending cannot go negative. This is just about the only zero lower bound that is working on our side. [Laughter]
I don’t care what anyone says, Yellen is totally going on the list for next year’s humor session. =P
Let’s begin by working through a situation that has been quite popular recently…suppose that you show the following photo to 10 of your friends:
Now, let’s say that 8 of your friends saw the dress as blue and black and 2 saw it as white and gold. You’d probably feel pretty comfortable asserting that, if you were to poll more people, more would see the dress as blue and black than white and gold. What about if your sample had 6 reporting blue and black and 4 reporting white and gold? You’d probably think something along the lines of “yeah, I know the result isn’t split 50/50, but it’s not that weird to get a little away from 50/50 in a sample even when the population is divided 50/50.” Neither of these statements is particularly unreasonable…but where do you draw the line? What if your sample reports 7 for blue and black and 3 for white and gold?
This is what tests of statistical significance are supposed to help out with.* Wouldn’t it be nice to know how likely it would be that your sample would give a 7-3 vote if the population really were split 50/50? This is what a statistical “p-value” tells us. If that value is sufficiently small, we say to ourselves “self, you know what? It’s pretty darn unlikely that I would see what I’m seeing from my sample if the population were really split 50/50 on this issue- maybe it’s time to entertain the notion that more people think the dress is blue and black then think it is white and gold.” (In reality, I think the white/gold camp wins out, but this is my story, so just go with it.) This is what statistical hypothesis testing does.
Sounds pretty compelling, right? If so, then I hope for your sake that you aren’t a social psych researcher, since the Journal of Basic and Applied Social Psychology decided to ban statistical significance testing in all of th articles that it publishes. (For you Bayesians out there, they aren’t too happy with you either but are willing to consider your analyses on a case by case basis.) Okay, I get that the generally accepted practice of considering a finding with a p-value of 0.05 or less as significant and everything else garbage isn’t without it’s problems, most obviously that researchers have incentives to finagle their analyses to sneak in under this threshold, but what on earth are researchers supposed to do instead? (i.e. what is the counterfactual to statistical hypothesis testing? So meta.)
I have some suggestions:
Just look at your data- if your graph traces out the shape of an animal, count it as meaningful. Like this:
Just wave your hands and talk forcefully until people take your result seriously. (This seems to have worked for macroeconomists for a while now.)
Ask your pets- right paw = significant, left paw = not significant. (If you have a bird, you could use the result to line the cage and…well, you figure it out, since I can’t decide if bird crap indicates significance or the lack thereof.)
The downside I suppose is that none of these approaches really have the gravitas normally associated with scientific rigor, so I’m at a bit of a loss. Seriously though, I don’t understand what researchers are supposed to do instead- the article mentions something about descriptive statistics, but the point of the statistical analysis that I referred to above is to give some context as to whether differences in descriptive statistics are large enough to be worth paying attention to.
As I said, statistical analysis is not without its flaws, but there are a number of far less controversial and likely more productive steps that the journal could have taken:
Pre-registration of experimental trials- if there is a record of what was tried experimentally, then it’s more clear how many things were tried in order to get a result that looks “good.” (The American Economic Association has started doing this, but it’s not mandatory yet.)
Publication of p-values and confidence intervals- rather than just declare something as “significant” because it meets some arbitrary p-value threshold (results are often simply given a number of asterisks to indicate significance), explicitly show the likelihood that your result is due to random chance and give a range for where or error bars for point estimate results.
Publication of negative results- if journals published papers where the “null hypothesis” (i.e. the uninteresting hypothesis that the researchers are looking to refute) can’t be rejected, then researchers would have less of an incentive to fiddle with their analyses to make it look like they meet the threshold of statistical significance. This, coupled with pre-registration of experiments, would cut down on what is known as “publication bias,” or the tendency for readers to see only the studies that showed the result that researchers were looking for (while the other studies get put in the circular file or whatever).
I guess this makes me thankful that I’m an economist, since if I ever write a paper that reads “well, my cat and I think this result looks pretty good, how about you?” it will be because I wanted to and not because I had to.
* Yes, I know that this doesn’t have to do with causality specifically, but this same method is used for analyses that attempt to tease out cause and effect.