
Thursday, February 26, 2015

Gender Quotas for Leadership Positions

So I'm doing this guest lecture for undergrads in two weeks, on direct evidence of gender discrimination. If I'm nervous, it's because I'm no expert. Then there's the issue that I'm a guy. I once joked to a friend that, while out running, I sped up when a woman started running the same route, because I did not want to get beaten by a woman. I thought it was a joke. A female coworker who overheard gave me a proper scolding.

But the professor of the class thinks that because I recently ran an audit study, and many studies of discrimination are based on audit studies, I'm qualified. So I'm taking this as an opportunity to learn.

There's actually been a lot in the news lately on gender discrimination. Take, for instance, this visualization exercise by Ben Schmidt making the rounds on social media, which shows results from over 14 million reviews at RateMyProfessor.com. Male professors are apparently more likely to be rated as "smart" or "brilliant" than their female counterparts. If that is not satisfying evidence (well, maybe male and female teachers just truly have different characteristics), take this similar piece by MacNell, Driscoll and Hunt. They go further: by hiding the gender identities of instructors in an online course, they show that students rate their professors higher when they believe they are male, even when they are female. Then, of course, there are the older, more classic studies. Goldin and Rouse document how the adoption of blind auditions in the 1970s and 80s, which concealed candidates' identities, was significantly behind the rise of female musicians in orchestras. An audit study by Neumark finds that women are discriminated against in hiring at restaurants: he sent undergrads to pose as fictitious job applicants to see if, holding all else constant, women were treated differently than men.

Strangely, while the literature delves deeply into establishing the existence of gender discrimination, I haven't found much on effective policies that counter it. An exception is this work by Beaman et al., who explore the effect of gender quotas for leadership positions in India. The natural experiment they exploit is as clean as any: in West Bengal, since 1998, a randomly selected one third of village council head positions have been reserved for women. Thus, in selected villages, only women could run for election, while in the others men could run, and they often won. Now, you can imagine how this policy could have gone horribly wrong: if social norms are unchangeable, quotas could in theory have precipitated a backlash against women leaders in later years. But that did not happen. In villages that had been randomly mandated to have female leaders in the past, women were more likely to stand for, and win, elected positions than in villages that had never been mandated. And voters appear to have reduced their bias because of the policy. In a later year, villagers were asked to evaluate speeches by hypothetical leaders, where the leader's gender was the only thing manipulated across respondents. Men usually showed bias against women, rating leadership quality higher in speeches delivered by men. But, astonishingly, this bias disappears in villages that had been chosen for a gender quota in the past. Beliefs, it turns out, can change.
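To make the logic concrete, here is a minimal sketch in Python of the difference-in-means comparison that random reservation makes possible. The village counts and win rates below are made up purely for illustration; they are not the Beaman et al. data or estimates.

```python
# A minimal sketch of the difference-in-means logic behind quota studies.
# All numbers are invented for illustration; they are NOT the Beaman et al. data.
import numpy as np

rng = np.random.default_rng(0)
n_villages = 500

# Random assignment: roughly one third of villages reserved for female leaders.
reserved = rng.random(n_villages) < 1 / 3

# Hypothetical later outcome: whether a woman stands for and wins office,
# with a higher rate assumed in previously reserved villages (for illustration).
woman_wins = rng.random(n_villages) < np.where(reserved, 0.15, 0.05)

# Because reservation was randomized, a simple difference in means is an
# unbiased estimate of the quota's causal effect on later female candidacy.
effect = woman_wins[reserved].mean() - woman_wins[~reserved].mean()
print(f"Estimated effect of past reservation: {effect:.3f}")
```

The whole point of the random mandate is that the unreserved villages serve as a valid comparison group, so this simple comparison has a causal interpretation.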

That's all I have for now, but I expect to learn more as I prepare. The task is proving more interesting than I had expected and I am getting more eager to present. Please let me know if I missed anything.

Monday, February 16, 2015

Breakfast with Hal Varian

PhD students had breakfast this morning with Hal Varian. Yes, the author of that ubiquitous microeconomics textbook. He was a professor here at the university for 17 years; now he is chief economist at Google.

You can imagine how we tried our best to pry out of him the secret projects the company is working on, but to no avail. He did, however, leave us with some anecdotes about working in Silicon Valley as an economist, the early days of the internet (in which Ann Arbor actually played an important part), and mechanism design.

Below are the paraphrased points I remember most from the conversation. Because I am writing this hastily, without any notes, I apologize for anything I misremember.

On the value of economists at Google: 
There was a time when usage of Google's search engine surged and then suddenly dropped. As people within the company started to panic, I suggested looking at the numbers in logs. And in logs it showed, indeed, a 5% drop in usage. But it also showed that the drop occurred seasonally, every summer. It only looked big this time because it followed a dramatic increase. Who would have thought that something as simple as looking at changes in log terms, routinely taught in econometrics, would have a practical, real-world application?
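A quick toy example of why logs help here, with simulated numbers rather than anything resembling Google's actual data: the same 5% dip looks bigger and bigger in raw levels as the series grows, but in logs it is the same size every year.

```python
# Toy illustration: a recurring 5% dip looks increasingly dramatic in levels
# as the series grows, but in logs the dip is identical every year.
# The numbers are simulated, not Google's usage data.
import numpy as np

years = np.arange(5)
pre_summer = 100 * 2.0 ** years                   # usage just before each summer (fast growth)
summer = pre_summer * 0.95                        # the same 5% dip every summer

drop_levels = pre_summer - summer                 # absolute drop grows every year
drop_logs = np.log(summer) - np.log(pre_summer)   # constant log(0.95) ~ -0.051, i.e. about -5%

for y in years:
    print(f"year {y}: drop in levels = {drop_levels[y]:7.1f}, "
          f"drop in logs = {drop_logs[y]:+.3f}")
```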
On the most pleasant surprise of transitioning from academe to industry: 
Well, we have free food at Google.
On big data and causality: 
You all know about big data. But big data can often only show association, not causal relationships. To learn causal relationships at Google, we are running some 1,000 experiments at any point in time, assessing our ranking algorithm and so on...
And this was the question that I personally asked: what kind of economist would Google hire? Is it more valuable to be well-versed in theory, say in mechanism design or auctions, or is it better these days to have a more applied skill set, knowing how to conduct experiments and how to analyze data in a causal way?
I would have to say the latter. You see, you want a skill set that is scarce and complementary to a resource that is today plentiful and cheap. And right now, that resource is big data. In other words, you do not want to be a right shoe in a world full of right shoes. You want to be a left shoe. Not to say that knowing theory is unimportant: you have to ask the right questions as well, and theory will give you the right questions to ask.

Wednesday, February 11, 2015

The Causal Effect

Don Rubin, one of the most influential statisticians of our time, visited today to give a talk on causal inference. In his introduction, I liked the description they gave of him: "He currently has as many citations as 1.5 times the entire population of the city of Cambridge, MA."

No surprise there. Rubin's causal model is of course a centerpiece of program evaluation. What is the causal effect of any kind of program or policy, say, for instance, the provision of microfinance to poor households? No serious evaluator can answer this question without at least considering Rubin's model.

It was only in graduate school, however, that I was introduced to it; I wish I had known these insights as early as my undergrad days, when I was perhaps too easily swayed by shoddy arguments. So for those unfamiliar, here is my quick attempt at explaining the causal model in layman's terms:

1. It is difficult to measure the effect of a program or policy. In fact, it is more difficult than you think. My favorite example is this: to measure the effect of hospitals on health, you cannot merely compare the health of patients versus non-patients. You would wrongly conclude that hospitals make people worse off, simply because patients are sicker than people who are not in hospitals.

2. The fundamental problem is missing data. You observe the outcome of hospital patients, but not their outcome in an alternate universe in which they had not gone to a hospital. You observe the outcome of microfinance recipients, but not how they would have fared had they not received the program. And without knowing what would have been, you cannot know what the effect of the program is. This missing data problem is essentially unsolvable: you cannot go back in time, prevent microfinance recipients from getting loans, and observe what would have happened to them.

3. BUT... and here's the key... there are ways to estimate what would have been. A randomized experiment is one of the best: randomize who gets treated (provide them, for example, with microfinance) and who does not. Because of randomization, the control group will be, on average, the same in every way as the treatment group except for the program. So the control group mimics what would have happened to the treatment group had they not received the program. The difference in outcomes between the treated and control groups is therefore an accurate estimate of the causal effect of the program (see the sketch just below this list).
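Here is a minimal Python simulation of the potential-outcomes idea, using the hospital example from point 1. All numbers are invented for illustration: each person has two potential outcomes, the naive patient-vs-non-patient comparison is badly biased because sicker people select into hospitals, and the randomized comparison recovers the true effect.

```python
# A minimal simulation of the potential-outcomes framework with made-up numbers.
# Each person has two potential health outcomes: one if hospitalized, one if not.
# In reality we only ever observe one of the two (the "missing data" problem).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

sickness = rng.normal(size=n)                    # how sick each person is
health_if_untreated = 70 - 10 * sickness         # potential outcome without hospital
health_if_treated = health_if_untreated + 5      # assumed true effect: hospitals add 5 points

# Naive comparison: sicker people select into hospitals, so comparing
# patients to non-patients makes hospitals look harmful.
goes_to_hospital = sickness > 1.0
naive = (health_if_treated[goes_to_hospital].mean()
         - health_if_untreated[~goes_to_hospital].mean())

# Randomized experiment: treatment assigned by coin flip, so the control
# group mimics what would have happened to the treated group without treatment.
assigned = rng.random(n) < 0.5
experimental = (health_if_treated[assigned].mean()
                - health_if_untreated[~assigned].mean())

print("True effect:            +5.0")
print(f"Naive comparison:       {naive:+.1f}")          # badly negative
print(f"Randomized experiment:  {experimental:+.1f}")   # close to +5
```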

Obvious? Ah, but "science is made up of so many things that appear obvious after they are explained," to quote Dune. Even so, I am not sure that many people grasp the basic insight. People still argue that the effect of a president can be gleaned from how GDP did during his term, but this is not necessarily a causal relationship. What would have been? Diet supplements are still marketed with before-and-after ads. This is often misleading.

In economics, thinking about causality has been around for a while. But I believe it is only relatively recently, starting in the 1990s (much later than medicine began using experiments to test drugs), that people ramped up running randomized experiments with NGOs, and even governments, to test theories and measure program impacts accurately. There has been an explosion of empirical work using experimental and quasi-experimental variation, and I think the field owes an important part of it to the Rubin model.

Monday, February 2, 2015

169 targets and 17 goals

Been reading recently about the Sustainable Development Goals (SDGs), which will replace the Millennium Development Goals (MDGs) as the world's targets for the next 15 years. Apparently, as it stands, there are 17 goals and 169 targets, which world leaders are set to agree upon when they meet this September, with only minor "tweaks" left. The MDGs had only 8 goals and 18 targets. Is the world prioritizing nothing by prioritizing everything?

Some partially conflicting views have come recently from experts at a favorite think tank, the Center for Global Development:

First, there are Nancy Birdsall and Anna Diofasi, who think that the goals are in fact not too many:
Rather, their multitude reflects a more inclusive process in the formulation of the post-2015 development agenda. They recognize that development today needs to be less about what poor countries ought to do to catch up and more about what both rich and poor countries can do together to address global challenges. In today’s world of climate change, epidemics, and cross-border terrorism, it is more evident than ever that the actions of those at one side of the world affect the lives of those at the other.
Then there's Charles Kenny, who is more pessimistic and with whom I think I agree more:
Imagine for a moment that the 169 targets were agreed by the full UN assembly as the draft stands. We would be setting ourselves the goal of achieving phenomenal global progress by 2030, including eliminating global poverty, malnutrition, HIV/Aids, malaria, and all violence against women; providing universal secondary education and health care as well as adequate housing, water, sanitation, energy and communications for all. It would be hard to write a more generous wishlist for Santa Claus but how will that make the world a better place? To put it in development jargon, what the sustainable development goals lack is a theory of change.
... the draft goals have ended up a laundry list of the sadly impossible (for example, the target to “halt the loss of all biodiversity”), practically immeasurable (“respect cultural diversity”) and simply unfathomable (“forge unity in diversity”).
Now, I am not against forging "unity in diversity" (LOL on that, by the way), but I agree there is a compelling argument for keeping the list small and at least measurable. All I know from my yearly resolution lists is that the more items I have, the less I get done. Resources are scarce, especially in poorer countries. Plus, it might be hard to hold governments accountable when you have 169 things to hold them accountable for.

Will this agreement be more symbolic than realistic? Here's hoping it will not be.