Social Networks and the Endurance of Economic Injury

Models
Killing two birds with one paper

My last fall semester exam was for Social Economics class. Reading through packets of model summaries, I set out to determine which model — besides the obviously lovable Coate and Loury (1993) — I would most like to understand, remember, and explain. That is, I picked a paper to write about here… which I could also use in my intellectual battle with those pesky blue books.

It was in the middle of the lecture packet on “Intergenerational Mobility” that I found Bowles and Sethi (2006). This paper illustrates how magically ridding the world of discrimination (i.e., saying “assume no discrimination” as an economist) doesn’t necessarily lead to perfect convergence of group economic outcomes. In other words, even with zero discrimination, group differences in economic success can still persist across generations. Why is that? Because social networks matter for economic outcomes, and networks are often segregated by group identity. In the authors’ words,

“Group differences in economic success may persist across generations in the absence of discrimination against the less affluent group because racial segregation of friendship networks, mentoring relationships, neighborhoods, workplaces and schools places the less affluent group at a disadvantage in acquiring the things — contacts, information, cognitive skills, behavioral attributes — that contribute to economic success.”

Social networks are undeniably important in determining individuals’ economic outcomes. As such, building social network structures into models of human capital accumulation improves realism and allows for intriguing intergenerational theoretical results.

Bowles and Sethi (2006) appeals to me because the model formalizes dynamics touched on in many conversations about the long-term impacts of discriminatory practices. In this post I will lay out the model mechanics, explain proofs of the key results, and showcase a graph the authors use to visually illustrate their theoretical results.

Model mechanics

The paper motivates the model with a few words on Brown v. Board and the black-white wage gap. “Many hoped that the demise of legally enforced segregation and discrimination against African Americans during the 1950s and 1960s coupled with the apparent reduction in racial prejudice among whites would provide an environment in which significant social and economic racial disparities would not persist.” Despite initial convergence from the 1950s to the 1970s, the gap has persisted. There are many reasons why this could be the case, continued practices of discrimination included. Bowles and Sethi use the following model to illustrate how such gaps could endure even absent discrimination.

In said model, a person is born into one of two groups — black or white — and lives for two periods. In the first period of life, she makes a decision about whether or not to acquire human capital and become ‘skilled.’ This is a simple binary choice. (She either becomes educated/trained or she does nothing.) In the second period, she is paid a wage based on her previous choice. If she didn’t acquire human capital (and thus isn’t skilled), she is paid a wage of 0. If she did (and thus is skilled), she is paid a wage of h. In effect, the marginal benefit of human capital acquisition is h for all agents.

For the sake of simplicity, the model first assumes all people have the same ability. As such, an individual’s cost of human capital acquisition is solely dependent on the level of human capital in that person’s social network. Define network capital, q, as the fraction of agents in the network who chose to acquire human capital and are skilled. The key assumption in the model is that the cost function c(q) is decreasing: c'(q)<0. In words, the higher the fraction of skilled people in a person’s network, the less costly it is for that person to become skilled. That is, acquiring training is less costly when your network can connect you with opportunities and provide you with relevant information.

As per usual, agents choose to become skilled if marginal benefit exceeds marginal cost. Assume c(0)>h>c(1) — that is, the cost of becoming skilled when no one in your network is skilled (q=0) is higher than the benefit of becoming skilled (h), but the cost when everyone is skilled (q=1) is lower than the benefit (h). In effect, there exists a unique threshold level q* such that c(q*)=h. The agent’s decision rule is then: for any q>q*, the agent chooses to become skilled, and for any q<q*, the agent does not. (I’ll ignore indifference throughout.)

While the decision rule is clear, how are social networks (and thus q’s) formed? We assume the population shares for B (black) and W (white) groups are x and 1-x, respectively. Moreover, agents born into the model in period t+1 have a large number of ties to those born in t. With probability p in [0,1] an associate is from the same group (B or W), but with probability (1-p) an associate is randomly picked from the general population of agents (could be either group). As such, the parameter p is the degree of “in-group bias” or segregation. Assume the parameter is the same for both groups. Therefore, the probability that: a black agent’s connection is also black is p+(1-p)x, a white agent’s connection is also white is p+(1-p)(1-x), a black agent’s connection is white is (1-p)(1-x), and a white agent’s connection is black is (1-p)x.

The network capital in t+1 depends on the mechanical formation of the agent’s group and human capital accumulation decisions made by black and white agents born in time t (represented by sB(t) and sW(t), respectively):

qB(t+1)=[p+(1-p)x] * sB(t) + [(1-p)(1-x)] * sW(t) 

qW(t+1)=[(1-p)x] * sB(t) + [p+(1-p)(1-x)] * sW(t) 

The above equations show that (for both groups) the fraction of connections in an agent’s network (born in t+1) who are skilled is: chance of black associate * fraction of black agents (born in t) who are skilled + chance of white associate * fraction of white agents (born in t) who are skilled. The network capital of people in the two groups is the same only if: p=0 (there is no segregation) or sW(t)=sB(t) (there is no initial group inequality in human capital).

Given the two above equations, we get a “law of motion” for human capital decisions: if qG(t+1)>q*, then sG(t+1)=1; if qG(t+1)<q*, then sG(t+1)=0 (with G in {B,W}). In words, if network capital is above the necessary threshold level, all agents of that group become skilled. If network capital is below the necessary threshold level, all agents of that group stay unskilled. Note that in this simplified model all agents make the same decisions within racial groups.
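To make these mechanics concrete, here is a minimal Python sketch of the dynamics. The linear cost function and the value of h are stand-ins of my own (the paper only requires some decreasing c(q) with c(0)>h>c(1)), as are all the function names:

```python
# A minimal sketch of the Bowles-Sethi dynamics. The linear cost below is
# my own stand-in; any decreasing c(q) with c(0) > h > c(1) works the same way.

def cost(q):
    """Cost of becoming skilled given network capital q (decreasing in q)."""
    return 1.0 - q

H = 0.4  # wage premium for being skilled; satisfies c(1) = 0 < H < c(0) = 1

def step(s_B, s_W, p, x):
    """One generation: form network capital, then apply the threshold rule."""
    q_B = (p + (1 - p) * x) * s_B + (1 - p) * (1 - x) * s_W
    q_W = (1 - p) * x * s_B + (p + (1 - p) * (1 - x)) * s_W
    # become skilled iff marginal benefit exceeds marginal cost
    return (1.0 if cost(q_B) < H else 0.0,
            1.0 if cost(q_W) < H else 0.0)

def simulate(p, x, s_B=0.0, s_W=1.0, generations=20):
    """Iterate the law of motion, starting from (s_B, s_W) = (0, 1) by default."""
    for _ in range(generations):
        s_B, s_W = step(s_B, s_W, p, x)
    return s_B, s_W
```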

From parameter values to group outcomes

How do we get real-world implications from this model? We know black people have been historically economically disadvantaged in the United States. But, how do we integrate this fact into the model’s framework? Well, we can set the initial state of the world to the extreme (sB, sW)=(0,1), meaning all black agents start off as unskilled and all white agents start off as skilled (perhaps due to separate but unequal hospitals/schools/etc). Based on that initial state, I can then see what the future states of the world will be under the previously derived law of motion.

  1. Let’s assume complete integration, p=0. Given (sB, sW)=(0,1), then qW(t+1)=qB(t+1)=1-x, and since cost is only dependent on network, cost is then c(1-x) for both groups. Thus, all black and white agents will make the same decisions and there will be no asymmetric stable steady state.
  2. Now, consider complete segregation, p=1. Given (sB, sW)=(0,1) again, then qB=0 and qW=1. So, cost is c(0) for black agents and c(1) for white agents. Recall c(1)<h<c(0), meaning that there is necessarily an asymmetric stable steady state. (No black agents will become skilled and all white agents will become skilled.) Both extremes are checked numerically in the sketch below.
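As a quick numerical check, running the simulate sketch from above (still with my stand-in cost function) reproduces both extremes:

```python
# Complete integration (p = 0): decisions coincide, so no asymmetric
# steady state; which symmetric state emerges depends on x.
print(simulate(p=0.0, x=0.3))  # (1.0, 1.0): everyone becomes skilled
print(simulate(p=0.0, x=0.8))  # (0.0, 0.0): everyone stays unskilled

# Complete segregation (p = 1): the initial inequality persists.
print(simulate(p=1.0, x=0.3))  # (0.0, 1.0)
```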

Given the points above, the authors explain,

“Since there exists an asymmetric stable steady state under complete segregation but none under complete integration, one may conjecture that there is a threshold level of segregation such that persistent group inequality is feasible if and only if the actual segregation level exceeds this threshold.”

Let’s prove this conjecture. (The following is my summary of the appendix proofs for propositions 1 and 2.) First, we find the unique x” (black population share) threshold s.t. c(1-x”)=h.

  1. Consider x'<x”, then c(1-x’)<h because cost decreases in its argument. Given (sB, sW)=(0,1), qB=(1-p)(1-x’) and qW=p+(1-p)(1-x’). So, c(qW) is decreasing in p and less than h at p=0 (since c(1-x’)<h). Moreover, c(qB) is increasing in p and c(qB)=c((1-0)(1-x’))<h when p=0 but c(qB)=c(0)>h when p=1, thus there is a unique p'(x’) such that c(qB)=h. For all p>p'(x’), we have c(qW)<h<c(qB), meaning (sB,sW)=(0,1) is a steady state. But, for all p<p'(x’), we have c(qW)<c(qB)<h, which makes it optimal for both groups of workers to become skilled and so there is a transition to (1,1). Since that then lowers both costs, the condition c(qW)<c(qB)<h continues to hold which makes (1,1) the stable steady state instead of (0,1).
  2. Consider x’>x”, so c(1-x’)>h. By the same logic, c(qB) is increasing in p and greater than h when p=0 since c(qB)=c(1-x’)>h; so c(qB)>h for all p. Meanwhile, c(qW) is decreasing in p and c(qW)=c((1-0)(1-x’))>h when p=0 but c(qW)=c(1)<h when p=1, thus there is a unique p'(x’) such that c(qW)=h. For all p>p'(x’), we have c(qW)<h<c(qB), meaning (sB,sW)=(0,1) is a steady state. But, for all p<p'(x’), we have h<c(qW)<c(qB), which makes it optimal for both groups of workers to stay unskilled and so there is a transition to (0,0). Since that then increases both costs, the condition h<c(qW)<c(qB) continues to hold, which makes (0,0) the stable steady state instead of (0,1).

In sum, given the fraction x, there is a threshold level of segregation p* above which (sB,sW)=(0,1) is a steady state (persistent group inequality), but below which the model shifts to a symmetric steady state. Whether the eventual steady state means welfare-improving equalization — (sB,sW)=(1,1) — or welfare-reducing equalization — (sB,sW)=(0,0) — depends on the fraction x. If the originally skilled group is large enough, all agents will become skilled; otherwise, all agents will become unskilled.
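Under my stand-in linear cost, the thresholds from the proof even have closed forms: c(q)=h pins down the network-capital threshold q*=1-h, and solving c(qG)=h for p in each case gives p*(x) directly. A sketch, reusing the definitions from the earlier code:

```python
# Segregation threshold p*(x) above which (sB, sW) = (0, 1) persists,
# still under the stand-in cost c(q) = 1 - q from the sketch above.

Q_STAR = 1.0 - H      # network-capital threshold: c(Q_STAR) = H
X_CUT = 1.0 - Q_STAR  # the x'' solving c(1 - x'') = H

def p_star(x):
    """Segregation level at which the marginal group's decision flips."""
    if x < X_CUT:
        # skilled group is large: below p*, both costs fall under H -> (1, 1)
        return 1.0 - Q_STAR / (1.0 - x)   # solves c((1 - p)(1 - x)) = H
    else:
        # skilled group is small: below p*, both costs exceed H -> (0, 0)
        return (Q_STAR - (1.0 - x)) / x   # solves c(p + (1 - p)(1 - x)) = H

for x in (0.2, 0.5, 0.8):
    # just above the threshold, the asymmetric state (0.0, 1.0) persists
    print(x, round(p_star(x), 3), simulate(p=p_star(x) + 0.05, x=x))
```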

In words, the model shows that group inequality persists if segregation is high enough. If segregation is below the threshold for maintaining inequality, group inequality disappears, but whether that happens through a loss of everyone’s skills or a gain of everyone’s skills depends on the population shares that define the model world. The authors use the following graph to depict these conclusions:

[Figure: steady-state outcomes as a function of population share and segregation level, showing the regions where group inequality persists, where everyone becomes skilled, and where everyone becomes unskilled]

Note that Bowles and Sethi (2006) use different notation than I do for the parameters of interest. Also, the authors normalize the benefits of human capital accumulation to 0.

This figure sharply summarizes the model’s results thus far. It succinctly and clearly shows how two parameters (population share and segregation) determine the eventual state of the world. I usually use graphs to visualize tangible data, but they are just as useful in visualizing concepts or theoretical results, as seen here. (The graph I built depicting when to share an idea à la Koszegi is another example of visualizing how model parameters relate to outcomes.)

Suspiciously slick?

There are a few issues with the model dynamics that you might have noticed reading the above summary. Namely, everyone within a racial group is identical, and convergence occurs in a single period. This feels less interesting than slower convergence that varies with other individual characteristics.

Much of the aforementioned simplicity comes from the assumption that ability is the same for all agents (i.e., ability is homogeneous). However, the model can be tweaked to make ability heterogeneous — that is exactly what the authors do later in the paper. The cost of human capital investment then varies with ability as well as network capital. So, cost now depends on something that is specific to the individual (ability) as well as something common to the group (racial identity). (Note: the model assumes no group differences in cost function or ability distributions.)

Moreover, the cost function c(a, q) is then decreasing in both ability and network capital level. In words, it is easier to become skilled when exposed to more skilled people (due to networks), and easier to become skilled when endowed with higher natural ability. For any given network capital level q, there is some threshold ability level a'(q) such that those above the cut-off become skilled and those below do not. Similar to the reasoning in the homogeneous case, an agent needs c(a, q)<h to become skilled. In effect, the relevant threshold is the a'(q) s.t. c(a'(q), q)=h. (Any ability above that, the person becomes skilled. Any ability below that, the person does not.)
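Here is a quick sketch of that cutoff logic. The separable cost and the uniform ability distribution are my own illustrative assumptions (the paper only requires c(a, q) to be decreasing in both arguments); H is the wage premium from the earlier sketch:

```python
# Heterogeneous-ability sketch with made-up functional forms.

def cost_het(a, q):
    """My assumed separable cost, decreasing in both ability a and network q."""
    return 1.0 - 0.5 * a - 0.5 * q

def ability_cutoff(q, h=H):
    """The a'(q) solving cost_het(a'(q), q) = h: skill up iff a > a'(q)."""
    return 2.0 * (1.0 - h) - q

def skilled_fraction(q, h=H):
    """With ability ~ Uniform[0, 1], the share of a group above the cutoff."""
    a_prime = min(max(ability_cutoff(q, h), 0.0), 1.0)
    return 1.0 - a_prime

# more network capital lowers the ability bar for becoming skilled:
print(ability_cutoff(0.9), ability_cutoff(0.3))  # 0.3 vs 0.9 (with h = 0.4)
```

Feeding skilled_fraction back into the network-capital equations gives the heterogeneous-ability law of motion, in which group skill shares can settle strictly between 0 and 1 instead of jumping all-or-nothing.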

An interesting insight on this topic is that: “individuals belonging to groups exposed to higher levels of human capital will themselves accumulate human capital at lower ability thresholds relative to individuals in groups with initially lower levels of human capital. This difference will be greater when segregation levels are high.” So, in this more complex build of the model, black people have to boast higher ability levels than their white counterparts to make human capital accumulation cost-beneficial… all due to the historical disadvantages built into their social networks. And that all came out of a bunch of threshold rules and variables!

Recap: behold the power of models

Bowles and Sethi built this model with an eye towards examples of enduring economic injury. They saw an empirical fact (the persistent black-white wage gap) and then put structure on their intuitive answers to naturally occurring questions: If there was zero discrimination, would the wage gap still endure? How would that work? Through what channels?

At the end of the day, their model hinges on a few items: (1) the inverse relationship between network capital and the cost of skill acquisition, and (2) network formation (as influenced by segregation and population share). The first is an assumption grounded in the reality of human success and failure — it doesn’t always “take a village,” but that definitely helps. The second is a distilled sketch of a complex and idiosyncratic process — network formation depends on two parameters (segregation and population share) and leads to useful, comprehensible insights. The previously showcased graph is especially important for highlighting the potential difficulty of policy decisions — what part of the graph are we in?

Blue book’d

Bowles and Sethi (2006) illustrates how social networks, population demographics, and decision-making interact to determine the endurance of economic injury. The model also illustrates how writing a blog can sometimes help in your academic life — as it turns out, I managed to describe and solve out pieces of this model in my Social Economics blue book exam. Two birds, one paper.



How I Learned to Stop Worrying and Love Economics

Words, words, words
Intro

Many months ago, in October, the Economics Nobel prize was awarded to Angus Deaton. Beyond experiencing sheer joy at having beaten my friend Mike at predicting the winner, I also was overwhelmed by the routine, yearly backlash against the discipline in the form of articles shared widely across any and all social networks. Of particular interest to me this year was the Guardian’s piece “Don’t let the Nobel prize fool you. Economics is not a science.” The dialogue surrounding this article made me incredibly curious to investigate my own thoughts on the discipline and its place in the realm of “the sciences.” In a frenzy of activity that can only be accurately explained as the result of a perfect storm of manic energy and genuine love for an academic topic, I wrote up a response not only to this article, but also to my own sense of insecurity in studying a discipline that is often cut down to size by the public and other academics.

In my aforementioned frenzy of activity, I found myself constantly talking with Mike (in spite of my status as the superior Nobel forecaster) about the definition of science, hierarchies of methodologies for causal inference, the role of mathematics in applied social science, and our own personal experiences with economics. Eventually, I linked the Guardian article to him in order to explain the source of my academic existential probing. As another economics researcher, Mike had a similarly strong reaction to reading the Guardian’s piece and ended up writing his own response as well.

So, I am now (albeit months after the original discussion) using this space to post both responses. I hope you’ll humor some thoughts and reactions from two aspiring economists.

Alex responds

I developed a few behavioral tics in college when asked about my major. First, I would blurt out “Math” and, after a brief pause of letting the unquestioned legitimacy of that discipline settle in, I would add “and Econ!” — an audible exclamation point in my voice. I had discovered through years of experience that the more enthusiastic you sounded, the less likely someone would take a dig at your field. Nonetheless, I would always brace myself for cutting criticism, as though the proofs I attempted to complete in Advanced Microeconomics were themselves the linchpin of the financial crisis.

In the court of public opinion, economics is often misunderstood as the get-rich-quick major synonymous with Finance. The basic assumptions of self-interest and rationality that the discipline gives its theoretical actors are stamped onto its practitioners and relabeled as hubris and heartlessness. Very few students are seeking out dreamy economics majors to woo them with illustrations of utility functions in which time spent together is a variable accompanied by a large positive coefficient. (The part where you explain that there is also a squared term with a negative coefficient since the law of diminishing marginal utility still applies is not as adorable. Or so I’ve been told.)

It can be hard to take unadulterated pride in a subject that individuals on all sides of the techie/fuzzy or quant/qual spectrum feel confident to discredit so openly. Economics is an outsider to many different categories of academic study; it is notably more focused on quantitative techniques than are other social sciences, but its applications are to human phenomena, which rightfully ousts it from the exclusive playground of the hard sciences. I admit I have often felt awkward or personally slighted when accosted by articles like Joris Luyendijk’s “Don’t let the Nobel prize fool you. Economics is not a science.”, which readily demeans contributions to economics both by appealing to the unsexiness of technical jargon and by contrasting those contributions with the literature and peace prizes:

Think of how frequently the Nobel prize for literature elevates little-known writers or poets to the global stage, or how the peace prize stirs up a vital global conversation: Naguib Mahfouz’s Nobel introduced Arab literature to a mass audience, while last year’s prize for Kailash Satyarthi and Malala Yousafzai put the right of all children to an education on the agenda. Nobel prizes in economics, meanwhile, go to “contributions to methods of analysing economic time series with time-varying volatility” (2003) or the “analysis of trade patterns and location of economic activity” (2008).

While comparing strides in economic methods to the contributions of peace prize recipients is akin to comparing apples to dragon fruit, Luyendijk does have a point that “[m]any economists seem to have come to think of their field in scientific terms: a body of incrementally growing objective knowledge.” When I first started playing around with regressions in Stata as a sophomore in college, I was working under the implicit assumption that there was one model I was seeking out. My different attempted specifications were the statistical equivalent of an archeologist’s whisks of ancient dust off of some fascinating series of bones. I assumed the skeleton would eventually peek out from the ground, undisputedly there for all to see. I assumed this was just like how there was one theorem I was trying to prove in graph theory — sure, there were multiple modes of axiomatic transport available to end up there, but we were bound to end up in the same place (unless, of course, I fell asleep in the snack bar before I could really get there). I quickly realized that directly transplanting mathematical and statistical notions into the realm of social science can lead to numbers and asterisks denoting statistical significance floating around in zero gravity with nothing to pin them down. Tying the 1’s, 3’s, and **’s down requires theory and we, as economic actors ourselves who perpetually seek optimal solutions, often entertain the fantasy of a perfectly complex and complete model that could smoothly trace the outline and motions of our dynamic, imperfect society.

However, it is exactly Luyendijk’s point that “human knowledge about humans is fundamentally different from human knowledge about the natural world” that precludes this type of exact, clean solution to fundamentally human questions in economics — a fact that has irked and continues to irk me, if not simply because of the limitations of computational social science, then because of the imperfection and incompleteness of human knowledge (even of our own societies, incentives, and desires) of which it reminds me. Yet, as I have spent more and more time steeped in the world of economics, I have come to confidently argue that the lack of one incredibly complex model that manages to encapsulate “timeless truth[s]” about human dynamics does not mean models or quantitative methods have no place in the social sciences. Professor Dani Rodrik, in probably my favorite piece of writing on economics this past year, writes that,

Jorge Luis Borges, the Argentine writer, once wrote a short story – a single paragraph – that is perhaps the best guide to the scientific method. In it, he described a distant land where cartography – the science of making maps – was taken to ridiculous extremes. A map of a province was so detailed that it was the size of an entire city. The map of the empire occupied an entire province.

In time, the cartographers became even more ambitious: they drew a map that was an exact, one-to-one replica of the whole empire. As Borges wryly notes, subsequent generations could find no practical use for such an unwieldy map. So the map was left to rot in the desert, along with the science of geography that it represented.

Borges’s point still eludes many social scientists today: understanding requires simplification. The best way to respond to the complexity of social life is not to devise ever-more elaborate models, but to learn how different causal mechanisms work, one at a time, and then figure out which ones are most relevant in a particular setting.

In this sense, “focusing on complex statistical analyses and modeling” does not have to be to “the detriment of the observation of reality,” as Luyendijk states. Instead, to echo Gary King, theoretical reasons for models can serve as guides to our specifications.

In my mind, economics requires not just the capability to understand economic theory and empirics, but also the humility to avoid mapping out the entire universe of possible economic interactions, floating coefficients, and Greek letters. Studying economics requires the humility to admit that economics itself is not an exact science, but also the understanding that this categorization does not lessen the impact of potential breakthroughs, just maybe the egos of researchers like myself.

[Comic via xkcd: where is economics?]

Mike responds

Economics is an incredibly diverse field, studying topics ranging from how match-fixing works among elite sumo wrestlers to why the gap between developed and developing countries is as large as it is. When considering a topic as broad as whether the field of economics deserves to have a Nobel prize, then, it is important to consider the entire field before casting judgment.

Joris Luyendijk, in his article “Don’t let the Nobel prize fool you. Economics is not a science,” directs most of his criticisms of economics at financial economics specifically instead of addressing the field of economics as a whole. We can even use Mr. Luyendijk’s preferred frame of analysis, Nobel prizes awarded, to see the distinction between finance and economics. Out of the 47 times the economics Nobel has been awarded, it was given in the field of financial economics only three times. And in his article, Mr. Luyendijk only addresses one of these three Nobels. I would argue that since financial economics is but a small part of the entire economics field, even intense criticism of financial economics should not bring the entire economics field down with it.

A closer look at the Nobels awarded in financial economics reveals that the award is not “fostering hubris and leading to disaster” as Mr. Luyendijk claims. The first Nobel awarded in financial economics was presented in 1990, for research on portfolio choice and corporate finance and the creation of the Capital Asset Pricing Model (CAPM). Far from causing financial contagion, to which Mr. Luyendijk hints the economics Nobel prize has contributed, optimal portfolio theory examines how to balance returns and risk, and CAPM provides a foundation for pricing in financial markets. More recently, the 2013 Nobel was again awarded in financial economics, for advances in understanding asset pricing in the short and long term, applications of which include the widely used Case-Shiller Home Price Index.

The second Nobel awarded for financial economics, to Merton and Scholes in 1997, does deserve some criticism, though. However, I would argue that the Black-Scholes asset pricing model gained traction long before the 1997 Nobel Prize, and continues to be used long after the collapse of the hedge fund Merton and Scholes were part of, because of its practical usefulness and not because of any legitimacy the Nobel prize might have endowed it with. The quantification of finance would have happened with or without the Nobel prize, and I find it hard to believe that the existence of the economics Nobel prize causes profit-driven financiers to blindly believe that the Black-Scholes formula is a “timeless truth.”

So if economics is not finance, then what is it? I would argue that an identifying feature of applied economics research is the search for causality. Specifically, much of economics is a search for causality in man-made phenomena. To model human behavior in a tractable way requires making assumptions and simplifications. I have to agree with Mr. Luyendijk that economics needs to be more forthright about those assumptions and limitations – economists may be too eager to take published findings as “timeless truths” without thinking about the inherent limitations of those findings.

Failing to realize the limitations of such findings can come back to bite. For example, the Black-Scholes model assumes that securities prices follow a log-normal process, which underestimates the probability of extreme events, such as the ones that led to the collapse of Long-Term Capital Management. But the failure of some to pay attention to well-known limitations of important findings should not diminish economics as a whole.
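To put a rough number on that underestimation, here is a back-of-the-envelope comparison; the Student-t with 3 degrees of freedom is an illustrative stand-in for fat-tailed returns, not an estimate from data:

```python
# Probability of a daily log-return below -5 standard deviations under a
# normal (the log-normal price assumption) vs. a fat-tailed Student-t.
from scipy import stats

z = 5.0
p_normal = stats.norm.cdf(-z)

df = 3  # illustrative degrees of freedom; variance of t(df) is df / (df - 2)
p_t = stats.t.cdf(-z / (df / (df - 2)) ** 0.5, df)  # scale to unit variance

print(f"P(< -5 sigma), normal: {p_normal:.1e}")  # about 3e-07
print(f"P(< -5 sigma), t(3):   {p_t:.1e}")       # several orders of magnitude larger
```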

Applied economics is also distinct from other social sciences in that it attempts to apply the tools of the hard sciences to human problems. I agree with Alex and Mr. Luyendijk that knowledge about the physical and human worlds is inherently different. The heterogeneity of human behavior creates messy models, and these models require the creation of new mathematical and statistical methods to understand them. This “mathematical sophistication” that Mr. Luyendijk bemoans is not just math for math’s sake, it is using tools from the hard sciences to explain real-world phenomena (and what’s wrong with pure math anyways?).

Despite the occasional messy solution, the ideal study in applied economics is still a controlled experiment, as it is in many hard sciences. In the human world, however, this experimental ideal is difficult to implement. Much of applied economics thus relies on quasi-experimental methods, trying to approximate experiments with observational data (by finding natural experiments, for example) when controlled experiments are not feasible. Still other branches of economics use actual economic experiments, such as randomized control trials (RCTs). The idea behind economics RCTs is the same as that behind clinical drug trials, where people are randomly separated into treatment and control groups to test the effect of an intervention. RCTs have become increasingly popular, especially in development work, over the past decade or so. Given Mr. Luyendijk’s concern about how divorced from the real world economics has become, he would be impressed by the amount of practical, detailed planning required to successfully implement RCTs, and taken aback by how different this fieldwork is from his vision of academics spending all day thinking up complex and impractical models.
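For a flavor of the mechanics, here is a toy sketch (with entirely made-up numbers) of the randomize-then-compare logic behind an RCT:

```python
# Toy RCT: randomize an intervention, then estimate its effect with a
# simple difference in means. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treated = rng.random(n) < 0.5                       # coin-flip assignment
outcome = 10 + 2.0 * treated + rng.normal(0, 5, n)  # true effect = 2.0

estimate = outcome[treated].mean() - outcome[~treated].mean()
se = (outcome[treated].var(ddof=1) / treated.sum()
      + outcome[~treated].var(ddof=1) / (~treated).sum()) ** 0.5
print(f"estimated effect: {estimate:.2f} (standard error {se:.2f})")
```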

A Nobel prize in economics will probably be awarded for advances in the methodology and applications of RCTs, the closest economics can come to the hard sciences that Mr. Luyendijk so reveres, sometime in the next decade. What will he say then?

Endnote

Mike and I were Research Assistants at Williams College together during summer 2013. Mike is currently on a Fulbright in China working with Stanford’s Rural Education Action Program, which conducts RCTs in rural China. We are both happy to hear any feedback on the linked articles and our responses, as we are both genuinely interested in thinking through where economics (and computational social sciences on the whole) should belong in scientific dialogue.

