The Social Justice Warriors are right

As you might know, I haven’t been exactly the world’s most consistent fan of the Social Justice movement, nor has it been the most consistent fan of me.

I cringe when I read about yet another conservative college lecture shut down by mob violence; or student protesters demanding the firing of a professor for trying gently to argue and reason with them; or an editor forced from his position for writing a (progressive) defense of “cultural appropriation”—a practice that I take to have been ubiquitous for all of recorded history, and without which there wouldn’t be any culture at all.  I cringe not only because I know that I was in the crosshairs once before and could easily be again, but also because, it seems to me, the Social Justice scalp-hunters are so astoundingly oblivious to the misdirection of their energies, to the power of their message for losing elections and neutering the progressive cause, to the massive gift their every absurdity provides to the world’s Fox Newses and Breitbarts and Trumps.

Yet there’s at least one issue where it seems to me that the Social Justice Warriors are 100% right, and their opponents 100% wrong. This is the moral imperative to take down every monument to Confederate “war heroes,” and to rename every street and school and college named after individuals whose primary contribution to the world was to defend chattel slavery.  As a now-Southerner, I have a greater personal stake here than I did before: UT Austin just recently removed its statue of Jefferson Davis, while keeping up its statue of Robert E. Lee.  My kids will likely attend what until very recently was called Robert E. Lee Elementary—this summer renamed Russell Lee Elementary.  (My suggestion, that the school be called T. D. Lee Parity Violation Elementary, was sadly never considered.)

So I was gratified that last week, New Orleans finally took down its monuments to slavers.  Mayor Mitch Landrieu’s speech, setting out the reasons for the removal, is worth reading.

I used to have little patience for “merely symbolic” issues: would that offensive statues and flags were the worst problems!  But it now seems to me that the fight over Confederate symbols is just a thinly-veiled proxy for the biggest moral question that’s faced the United States through its history, and also the most urgent question facing it in 2017.  Namely: Did the Union actually win the Civil War? Were the anti-Enlightenment forces—the slavers, the worshippers of blood and land and race and hierarchy—truly defeated? Do those forces acknowledge the finality and the rightness of their defeat?

For those who say that, sure, slavery was bad and all, but we need to keep statues to slavers up so as not to “erase history,” we need only change the example. Would we similarly defend statues of Hitler, Himmler, and Goebbels, looming over Berlin in heroic poses?  Yes, let Germans reflect somberly and often on this aspect of their heritage—but not by hoisting a swastika over City Hall.

For those who say the Civil War wasn’t “really” about slavery, I reply: this is the canonical example of a “Mount Stupid” belief, the sort of thing you can say only if you’ve learned enough to be wrong but not enough to be unwrong.  In 1861, the Confederate ringleaders themselves loudly proclaimed to future generations that, indeed, their desire to preserve slavery was their overriding reason to secede. Here’s CSA Vice-President Alexander Stephens, in his famous Cornerstone Speech:

Our new government is founded upon exactly the opposite ideas; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth.

Here’s Texas’ Declaration of Secession:

We hold as undeniable truths that the governments of the various States, and of the confederacy itself, were established exclusively by the white race, for themselves and their posterity; that the African race had no agency in their establishment; that they were rightfully held and regarded as an inferior and dependent race, and in that condition only could their existence in this country be rendered beneficial or tolerable. That in this free government all white men are and of right ought to be entitled to equal civil and political rights; that the servitude of the African race, as existing in these States, is mutually beneficial to both bond and free, and is abundantly authorized and justified by the experience of mankind, and the revealed will of the Almighty Creator, as recognized by all Christian nations; while the destruction of the existing relations between the two races, as advocated by our sectional enemies, would bring inevitable calamities upon both and desolation upon the fifteen slave-holding states.

It was only when defeat looked inevitable that the slavers started changing their story, claiming that their real grievance was never about slavery per se, but only “states’ rights” (states’ right to do what, exactly?). So again, why should we take the slavers’ rationalizations any more seriously than we take the postwar epiphanies of jailed Nazis that actually, they’d never felt any personal animus toward Jews, that the Final Solution was just the world’s biggest bureaucratic mishap?  Of course there’s a difference: when the Allies occupied Germany, they insisted on thorough de-Nazification.  They didn’t suffer streets to be named after Hitler. And today, incredibly, fascism and white nationalism are greater threats here in the US than they are in Germany.  One reads about the historic irony of some American Jews, who are eligible for German citizenship because of grandparents expelled from there, now seeking to move there because they’re terrified about Trump.

By contrast, after a brief Reconstruction, the United States lost its will to continue de-Confederatizing the South.  The leaders were left free to write book after book whitewashing their cause, even to hold political office again.  And probably not by coincidence, we then got nearly a hundred years of Jim Crow—and still today, a half-century after the civil rights movement, southern governors and legislatures that do everything in their power to disenfranchise black voters.

For those who ask: but wasn’t Robert E. Lee a great general who was admired by millions? Didn’t he fight bravely for a cause he believed in?  Maybe it’s just me, but I’m allergic to granting undue respect to history’s villains just because they managed to amass power and get others to go along with them.  I remember reading once in some magazine that, yes, Genghis Khan might have raped thousands and murdered millions, but since DNA tests suggest that ~1% of humanity is now descended from him, we should also celebrate Khan’s positive contribution to “peopling the world.” Likewise, Hegel and Marx and Freud and Heidegger might have been wrong in nearly everything they said, sometimes with horrific consequences, but their ideas still need to be studied reverently, because of the number of other intellectuals who took them seriously.  As I reject those special pleas, so I reject the analogous ones for Jefferson Davis, Alexander Stephens, and Robert E. Lee, who as far as I can tell, should all (along with the rest of the Confederate leadership) have been sentenced for treason.

This has nothing to do with judging the past by standards of the present. By all means, build statues to Washington and Jefferson even though they held slaves, to Lincoln even though he called blacks inferior even while he freed them, to Churchill even though he fought the independence of India.  But don’t look for moral complexity where there isn’t any.  Don’t celebrate people who were terrible even for their own time, whose public life was devoted entirely to what we now know to be evil.

And if, after the last Confederate general comes down, the public spaces are too empty, fill them with monuments to Alan Turing, Marian Rejewski, Bertrand Russell, Hypatia of Alexandria, Emmy Noether, Lise Meitner, Mark Twain, Srinivasa Ramanujan, Frederick Douglass, Vasili Arkhipov, Stanislav Petrov, Raoul Wallenberg, even the inventors of saltwater taffy or Gatorade or the intermittent windshield wiper.  There are, I think, enough people who added value to the world to fill every city square and street sign.

two economists ask teachers to behave as irrational actors

I was considering doing a front-to-back fisking of this interview of Raj Chetty, Professor of Economics at Stanford University, conducted by the libertarian economist Tyler Cowen. Despite Chetty’s obviously impressive credentials, he says several things in the interview that simply don’t hold up to scrutiny, particularly regarding the simultaneity problem and the impact of the shared environment. I’ve decided to just focus on one key point, though.

The standard neoliberal ed reform argument goes like this: the major entrenched socioeconomic and racial inequalities in this country are no excuse for poor quantitative outcomes for groups of students; teachers and schools, despite all of the evidence to the contrary, control most of the variation in educational outcomes; therefore our perceived education problems are the result of lazy, untalented teachers; introducing a market for schooling will force schools to get rid of those teachers and metrics will improve. Now this story has failed to play out this way again and again in places like Detroit and Washington DC, but we’ll let that slide for now. If we accept this argument on its own terms, we need to get many talented people into teaching and replace the hundreds of thousands of “bad” teachers we’d be getting rid of.

Ed reform types are typically cagey about the scale of teacher dismissals – they hate to actually come out and say “I’d like to get hundreds of thousands of teachers fired” – but based on their own numbers, their own claims about the size and extent of the problem, that’s what needs to happen. You can’t simultaneously say that there’s a nationwide education crisis that needs to be solved by firing teachers and avoid the conclusion that huge numbers need to be fired. If reformers claim that even one out of every ten public teachers needs to be let go (a low number in reform rhetoric), we’re talking about more than 300,000 fired teachers.

I’ve argued before that the idea that market economics are effective means to solve educational problems falls apart once you recognize that, unlike a factory building a widget, educators don’t control most of what contributes to a child’s learning outcomes. But suppose you do believe in the standard conservative economics take on school reform: how can Chetty’s ideas make sense, if we trust young workers in a labor market to act in their own rational best interest? Chetty believes that we need, at scale, to “either retrain or dismiss the teachers who are less effective, [to] substantially increase productivity without significantly increasing cost.” Without increasing cost means, in other words, without raising teacher salaries. The median teacher in this country makes ~$57,000 a year; the 75th percentile makes ~$73k, and the 25th percentile, ~$45k. Compare with median lawyer salaries well above $100,000 a year and median doctor salaries close to $200,000, or an average of $125,000+ for MBA graduates. So we’re not going to pay teachers more, and we’re going to erode labor protections enough to dismiss those less effective teachers. This doesn’t sound like a good deal already.
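As a rough illustration of that pay gap, the salary figures in this paragraph can be compared directly (the numbers are the approximate medians quoted above):

```python
# Approximate median annual salaries (USD) quoted in the paragraph above.
teacher_median = 57_000
comparison = {"lawyer": 100_000, "doctor": 200_000, "MBA graduate": 125_000}

# How many teacher-medians each comparison profession earns.
for job, salary in comparison.items():
    print(f"{job}: {salary / teacher_median:.1f}x the median teacher salary")
```

Even the lowest of those comparison figures is nearly twice the median teacher salary.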

Of course, teachers don’t just suffer from low median wages compared to people with similar levels of schooling. They also suffer from far lower social status than they are typically afforded in other countries, as Dr. Chetty acknowledges:

Yeah, I think status seems incredibly important. My sense of the K–12 education system in the US is, unfortunately for many kids graduating from top colleges, teaching is not near the top of the list of professions that they’d consider. It’s partly because, in a sense, they can’t afford to be teachers because it entails such a pay cut. But also because they feel that it’s not the most prestigious career to pursue.

Why yes, Dr. Chetty, it’s true! Teachers don’t get a lot of prestige in this country! Maybe that’s because well-paid celebrity academics who make several times the median teacher salary – people like you – talk casually about firing them en masse and insist that they are the source of poor metrics! The ed reform movement has insulted the profession of public school teacher for years. Popular expressions of that philosophy, like the execrable documentary Waiting for “Superman”, have contributed to widespread assumptions that students are failing because their teachers are lazy and corrupt. How can a political movement that has relentlessly insulted the teaching profession not contribute to declining interest in being part of that profession?

Here in New York, the numbers are clear: we’re already facing a serious teacher shortage.

What Chetty and Cowen are asking for makes no sense according to their own manner of thinking. Dr. Chetty, Dr. Cowen: there is no bullpen. Even if I thought that teachers controlled far more of the variance in quantitative education metrics than I do, and even if I didn’t have objections about fair labor practices against removing hundreds of thousands of teachers, we would be stuck with this simple fact. We do not have hundreds of thousands of talented young professionals, eager to forego the far greater rewards available in the private sector, ready to jump in and start teaching. And we certainly won’t have such a thing if we share Chetty’s resistance to paying teachers more and his commitment to making it easier to fire them.

So: no higher salaries for a relatively low-paying profession, eroding the job security that is the most treasured benefit of the job, continuing to degrade and insult the current workforce as lazy and undeserving, getting rid of hundreds of thousands of them, and yet somehow attracting hundreds of thousands of more talented, more committed young workers to become teachers.

According to what school of economics, exactly, is such a thing possible?


Why do so few people major in computer science?

In 2005, about 54,000 people in the US earned bachelor’s degrees in computer science. That figure was lower every year afterwards until 2014, when 55,000 people majored in CS. I’m surprised not only that the figure is low; the greater shock is that it was flat for a decade. Given high wages for developers and the cultural centrality of Silicon Valley, shouldn’t we expect far more people to have majored in computer science?

This is even more surprising when we consider that 1.90 million people graduated with bachelor’s degrees in 2015, which is 31% higher than the 1.44 million graduates in 2005. (Data is via the National Center for Education Statistics, Digest of Education Statistics.) That means that the share of people majoring in computer science has decreased, from 3.76% of all majors in 2005 to 3.14% of all majors in 2015. Meanwhile, other STEM majors have grown over the same period: “engineering” plus “engineering technologies” went from 79,544 to 115,096, a gain of 45%; “mathematics and statistics” from 14,351 to 21,853, a gain of 52%; “physical sciences and science technologies” from 19,104 to 30,038, a gain of 57%; “biological and biomedical sciences” from 65,915 to 109,896, a gain of 67%. “Computer sciences and information technologies”? From 54,111 in 2005 to 59,581 in 2015, a paltry gain of 10.1%.
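Those growth figures follow directly from the NCES counts; here is a minimal sketch recomputing them (the numbers are the 2005 and 2015 counts quoted above):

```python
# Bachelor's degree counts from the NCES Digest of Education Statistics,
# as quoted in the paragraph above: (2005, 2015).
counts = {
    "engineering (incl. engineering technologies)": (79_544, 115_096),
    "mathematics and statistics": (14_351, 21_853),
    "physical sciences and science technologies": (19_104, 30_038),
    "biological and biomedical sciences": (65_915, 109_896),
    "computer sciences and information technologies": (54_111, 59_581),
}

def pct_growth(then, now):
    """Percentage growth from `then` to `now`, rounded to the nearest percent."""
    return round(100 * (now - then) / then)

for field, (y2005, y2015) in counts.items():
    print(f"{field}: {pct_growth(y2005, y2015)}% growth")

# CS's share of all bachelor's degrees also fell:
print(round(100 * 54_111 / 1_440_000, 2))  # share of all majors in 2005
print(round(100 * 59_581 / 1_900_000, 2))  # share of all majors in 2015
```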

If you’d like a handy chart, I graphed the growth here, with number of graduates normalized to 2005.

(Addendum: Several people have pointed out that 2005 was an idiosyncratic year, and that I should not rebase figures from that date. I graphed it from this point because the NCES dataset I’ve been using breaks out the data by one-year intervals only since 2005. Scroll to the end of the post to see data on graduates from 1975, which shows clearly that 2005 was a peak for graduates. A fuller discussion would involve the impact of the dotcom bubble; see below.)

I consider this a puzzle because I think that people who go to college decide on what to major in significantly based on two factors: earning potential and whether a field is seen as high-status. Now let’s establish whether majoring in CS delivers either.

Are wages high? The answer is yes. The Bureau of Labor Statistics has data on software developers. The latest data we have is from May 2016, in which the median annual pay for software developers is $106,000; pretty good, considering that the median annual pay for all occupations is $37,000. But what about the lowest decile, which we might consider a proxy for the pay of entry-level jobs that fresh grads can expect to claim? That figure is $64,650, about 1.75 times the median annual pay for all occupations. We can examine data from a few years back as well. In 2010, median pay for software developers was $87,000; pay at the lowest decile was $54,000. Neither was low then, and both have grown since.
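As a quick check on these BLS figures, the multiples and growth rates work out as follows:

```python
# BLS figures quoted above (USD, annual).
median_all = 37_000            # median pay, all occupations (May 2016)
dev_median_2016 = 106_000      # software developers, median
dev_decile_2016 = 64_650       # software developers, 10th percentile
dev_median_2010 = 87_000
dev_decile_2010 = 54_000

# Entry-level developer pay as a multiple of the overall median.
print(round(dev_decile_2016 / median_all, 2))

# Growth from 2010 to 2016, in percent.
print(round(100 * (dev_median_2016 / dev_median_2010 - 1)))
print(round(100 * (dev_decile_2016 / dev_decile_2010 - 1)))
```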

Now we can consider whether someone majoring in computer science can expect to join a high-status industry. That’s more difficult to prove rigorously, but I submit the answer is yes. I went to high school during the late aughts, when the financial crisis crushed some of Wall Street’s allure, and Silicon Valley seemed glamorous even then. Google IPO’d in 2004, people my age all wanted to buy iPhones and work for Steve Jobs, and we were all signing up for Facebook. People talked about how cool it would be to intern at these places. One might not have expected to end up at Google after college, but it was a great goal to aspire to. Industries like plastics or poultry or trucking have no such glittering companies to attract talent.

I tweeted out this puzzle and received a variety of responses. Most of them failed to satisfy. Now I want to run through some common solutions offered to this puzzle along with some rough and dirty argument on what I find lacking about them.

Note: All data comes from the Digest of Education Statistics, Department of Education.


1. Computer science is hard. This is a valid observation, but it doesn’t explain behaviors on the margin. CS is a difficult subject, but it’s not the only hard major. People who proclaim that CS is so tough have to explain why so many more people have been majoring in math, physics, and engineering; remember, all three majors have seen growth of over 40% between 2005 and 2015, and they’re no cakewalks either. It’s also not obvious that their employment prospects are rosier than those of CS majors (at least for the median student who doesn’t go to a hedge fund). Isn’t it reasonable to expect that people with an aptitude for math, physics, and engineering will also have an aptitude for CS? If so, why is it the only field with low growth?

On the margin, we should expect high wages to attract more people to a discipline, even if it’s hard. Do all the people who are okay with toiling for med school, law school, or PhD programs find the CS bachelor’s degree to be unthinkably daunting?

2. You don’t need a CS degree to be a developer. This is another valid statement that I don’t think explains behaviors on the margin. Yes, I know plenty of developers who didn’t graduate from college or major in CS. Many who didn’t go to school were able to learn on their own, helped along by the varieties of MOOCs and boot camps designed to get them into industry.

It might be true that being a software developer is the field that least requires a bachelor’s degree with its associated major. Still: Shouldn’t we expect some correlation between study and employment here? That is, shouldn’t having a CS major be considered a helpful path into the industry? It seems to me that most tech recruiters look on CS majors with favor.

Although there are many ways to become a developer, I’d find it surprising if majoring in CS is a perfectly useless way to enter the profession, and so people shun it in favor of other majors.

3. People aren’t so market-driven when they’re considering majors. I was a philosophy major, and no, I didn’t select it on the basis of its dazzling career prospects. Aren’t most people like me when it comes to selecting majors?

Maybe. It’s hard to tell. Evidence in favor includes a study published in the Journal of Human Capital, which suggests that people would reconsider their majors if they actually knew what they could earn in their associated industries. That is, they didn’t think hard enough about earning potential when they were committing to their majors.

We see some evidence against this idea if we look at the tables I’ve been referencing. Two of the majors with the highest rates of growth have been healthcare and law enforcement. The number of people graduating with bachelor’s degrees in “health professions and related programs” more than doubled, from 80,865 in 2005 to 216,228 in 2015. We can find another doubling in “homeland security, law enforcement, and firefighting,” from 30,723 in 2005 to 62,723 in 2015. Haven’t these rents-heavy and government-driven sectors been pretty big growth sectors in the last few years? If so, we can see that people have been responsive to market-driven demand for jobs.
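Those two doublings can be verified from the quoted counts:

```python
# Bachelor's degree counts, 2005 vs. 2015, as quoted above.
growth = {
    "health professions and related programs": (80_865, 216_228),
    "homeland security, law enforcement, and firefighting": (30_723, 62_723),
}
for field, (y2005, y2015) in growth.items():
    print(f"{field}: {y2015 / y2005:.2f}x from 2005 to 2015")
```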

(As a sidenote, if we consider the pipeline of talent to be reflective of expectations of the economy, and if we consider changes in the number of bachelor’s degrees to be a good measure of this pipeline, then we see more evidence for Alex Tabarrok’s view that we’re becoming a healthcare-warfare state rather than an innovation nation.)

In the meantime, I’m happy to point out that the number of people majoring in philosophy slightly declined between 2005 and 2015, from 11,584 to 11,072. It’s another sign that people are somewhat responsive to labor market demands. My view is that all the people who are smart enough to excel as a philosophy major are also smart enough not to actually pursue that major. (I can’t claim to be so original here—Wittgenstein said he saw more philosophy in aerospace engineering than he did in philosophy.)

4. Immigrants are taking all the jobs. I submit two ways to see that not all demand is met by immigrants. First, most immigrants who come to the US to work are on the H1B visa, and that number has been capped at 65,000 every year since 2004. (There are other visa programs available, but the H1B is the main one, and not all of its slots go to software engineers.) Second, rising wages should be prima facie evidence that there’s a shortage of labor. If immigrants had flooded the market, we should have seen wages decline; that hasn’t been the case.

To say that immigrants are discouraging people from majoring in CS requires arguing that students are acutely aware of the level of the H1B cap, expect that it will be lifted at some point in the near future, and therefore find it too risky to enter this field because they think they’ll be competing with foreign workers on home soil. Maybe. But I don’t think that students are so acutely sensitive to this issue.

5. Anti-women culture. Tech companies and CS departments have a reputation for being unfriendly to women. The NCES tables I’m looking at don’t give a breakdown of majors by gender, so we can’t tell if the shares of men and women majoring in CS have differed significantly from previous decades. One thing to note is that the growth in people earning CS majors has been far below the growth of either gender earning bachelor’s degrees.

More women graduate from college than men. (Data referenced in this paragraph comes from this table.) In 1980, each gender saw about 465,000 new grads. Since then, many more women have earned degrees than men; in 2015, 812,669 men earned bachelor’s degrees, while 1,082,265 women did. But since 2005, the growth rate for women earning bachelor’s degrees has not significantly outpaced that of men: 32.5% more men earned bachelor’s degrees over the previous decade, a slightly higher rate than the 31.5% for women. It remains significant that women are maintaining that growth rate over a higher base, but their growth may no longer be much higher than men’s in the future.
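As a sanity check, the 2005 baselines implied by those growth rates can be back-computed; they sum to roughly the 1.44 million total 2005 graduates cited earlier:

```python
# 2015 bachelor's degree counts and decade growth rates quoted above.
men_2015, women_2015 = 812_669, 1_082_265
men_growth, women_growth = 0.325, 0.315

# Implied 2005 graduating classes.
men_2005 = men_2015 / (1 + men_growth)
women_2005 = women_2015 / (1 + women_growth)

print(round(men_2005), round(women_2005))
print(round((men_2005 + women_2005) / 1e6, 2))  # total, in millions
```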

What’s important is that the roughly 30% growth rate for both genders is well above the 10% growth in CS majors over this time period. We can’t pick out the breakdown of genders from this dataset, but I’d welcome suggestions on how to find those figures in the comments below.

6. Reactionary faculty. The market for developers isn’t constrained by some guild like the American Medical Association, which caps the number of people who graduate from med schools in the name of quality control.

CS doesn’t have the same kind of guild masters, unless we count faculty as serving this function on their own. It could be that people serving on computer science faculties are contemptuous of those who want high pay and the tech life; instead they’re looking for theory-obsessed undergraduates who are as interested in, say, Turing and von Neumann as they are. And so, in response to a huge new demand for CS majors, they significantly raise standards, allowing no more than, say, 500 people to graduate where a decade ago 450 did. Rather than catering to the demands of the market, they raise standards so that they’re failing an even higher proportion of students, to push them out of their lovely, pure, scholarly field.

I have no firsthand experience. To determine this as a causal explanation, we would have to look into how many more students have been graduating from individual departments relative to the number of people who were weeded out. The latter is difficult to determine, but it may be possible to track if particular departments have raised standards over the last few decades.

7. Anti-nerd culture. Nerds blow, right? Yeah, no doubt. But aren’t the departments of math, physics, and engineering also filled with nerds, who can expect just as much social derision on the basis of their choice? That these fields have seen high growth when CS has not is evidence that people aren’t avoiding all the uncool majors, only the CS one.

8. Skill mismatch and lack of training from startups. This is related to, but slightly different from, my accusation that CS faculty are reactionaries. Perhaps the professors are all too theoretical and would never make it as coders at tech companies. Based on anecdotal evidence, I’ve seen that most startups are hesitant to hire fresh grads; instead they want people who have had some training outside of college. One also hears that the 10X coders aren’t that eager to train new talent; there isn’t enough incentive for them to.

This is likely a factor, but I don’t think it goes a great length in explaining why so few people commit to majoring in the field. Students see peers getting internships at big tech companies, and they don’t necessarily know that their training is too theoretical. And even if students do realize this, it shouldn’t deter them: they might also know they can patch up their skills by attending a boot camp.

9. Quality gradient. Perhaps students who graduate from one of the top 50 CS departments have an easy time finding a job, while those who graduate from outside that club have a harder time. But this is another explanation that attributes to the average freshman a greater degree of sophistication than he or she can be observed to possess. Do students have an acute sense of the quality gradient between the best and the rest? Why is the marginal student not drawn to study CS at a top school, and why would a top student not want to study CS at a non-top school, especially if he or she can find boot camps and MOOCs to bolster learning? I would not glance at what students do and immediately conclude that they’re hyperrational creatures. And a look at the growing numbers of visual arts majors offers evidence that many people are not rational about what they should study.

10. Psychological burn from the dotcom bubble. Have people been deeply scarred by the big tech bubble? It burst in 2001; if the CS majors who went through it experienced a long period of difficulty, then they may have successfully warned younger people away from majoring in it. To prove this, we’d have to see whether people who graduated after the bubble did have a hard time, and whether college students are generally aware of the difficulties experienced by graduates from previous years.

11. No pipeline issues anymore. In 2014, the number of people majoring in CS surpassed the figure in 2005, the previous peak. In 2015, that figure was higher still. And based on anecdotal evidence, it seems like there are many more people taking CS intro classes than ever before. 2014 is four years after The Social Network came out; that movie did seem to make people more excited about startups, so perhaps tech wasn’t as culturally central before 2010 as it seems now.

I like to think of The Social Network as the Liar’s Poker of the tech world: An intended cautionary tale of an industry that instead hugely glamorized it to the wrong people. The Straussian reading of these two works, of course, is that Liar’s Poker and The Social Network had every intention to glamorize their respective industries; the piously-voiced regrets by their creators are absolutely not to be believed.

Even if the pipeline is bursting today, the puzzle is why high wages and the cultural centrality of Silicon Valley have not drawn in more people in the previous decade. Anyone who offers an argument also has to explain why things are different today than in 2005. Perhaps I’ve overstated how cool tech was before 2010.


A few last thoughts:

If this post is listed on Hacker News, I invite people to comment there or on this post to offer discussion. In general, I would push on people to explain not just what the problems are in the industry, but how they deter college students from pursuing a major in CS. College freshmen aren’t expected to display hyperrationality on campus or for their future. Why should we look for college students to have a keen appreciation of the exponential gradient between different skill levels, or potential physical problems associated with coding, or the lack of training provided by companies to new grads? Remember, college students make irrational choices in major selection all the time. What deters them from studying this exciting, high-wage profession? Why do they go into math, physics, or engineering in higher numbers instead?

I wonder to what extent faculties are too strict with their standards, unwilling to let just anyone enter the field, especially for those who are jobs-minded. Software errors are usually reversible; CS departments aren’t graduating bridge engineers. If we blame faculty, should people be pushing for a radical relaxation/re-orientation of standards in CS departments?

Let’s go to the top end of talent. Another question I think about now: To what extent are developers affected by power law distributions? Is it the case that the top, say, 25 machine learning engineers in the world are worth as much as the next best 300 machine learning engineers combined, who in turn are worth as much as the next best 1,500? If so, how should we evaluate the positioning of the largest tech companies?
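As a toy illustration of that thought experiment (the tier sizes come from the question above; the assumption that each tier contributes equal total value, and the value units themselves, are mine purely for the sketch):

```python
# Hypothetical tiers: 25, then 300, then 1,500 engineers, with each
# tier assumed to contribute the same total value.
tiers = [25, 300, 1500]
tier_value = 100.0  # arbitrary units of value contributed by each tier

# Value of a single engineer within each tier
per_engineer = [tier_value / n for n in tiers]

# With equal value per tier, the per-engineer ratio is just the size ratio:
ratios = [n / tiers[0] for n in tiers]
print(ratios)  # [1.0, 12.0, 60.0]
```

Under this assumption, one top-tier engineer is worth 12 engineers from the next tier and 60 from the tier after that, which is one way to make the power-law claim concrete.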

Perhaps this is a good time to bring up the idea that the tech sector may be smaller than we think. By a generous definition, 20% of the workers in the Bay Area work in tech. Matt Klein at FT Alphaville calculates that the US software sector is big neither in employment nor in value-added terms. Software may be eating the world, but right now it’s either taking small bites, or we’re not able to measure it well.

Finally, a more meditative, grander question from Peter Thiel: “How big is the tech industry? Is it enough to save all Western Civilization? Enough to save the United States? Enough to save the State of California? I think that it’s large enough to bail out the government workers’ unions in the city of San Francisco.”

Thanks to Dave Petersen for offering helpful comments.


Addenda, May 30th:

I’m pleased that this post has generated more discussion here in the comment section, on Hacker News, Reddit, and on Twitter. On Twitter, Evan Soltas pointed to an article from Eric Roberts, a professor at Stanford, discussing this question from a longer term view.

Several people remarked that my rebasing of majors to 2005 is misleading because of the impact of the dotcom bubble. I only gestured at it in the post above, but it probably explains a big chunk of why the number of CS majors hasn’t risen. So here’s a fuller chart, from Eric Roberts’ article, with the number of CS majors starting from 1975.

Wow, the number of people graduating with CS degrees is really cyclical. The first peak in 1985 corresponds to the release of the IBM Personal Computer. The second peak corresponds to the 2001 dotcom bubble. I agree now that the ’01 bubble explains a lot of the subsequent decline; people graduated into a bad job market, and that scared many students away. That year, however, may have been the worst of it; by 2005, Google had IPO’d, Facebook was spreading on campuses, the iPod was a success, and the iPhone would be released two years later. Those companies drew students back into studying CS, and we can see that in the renewed rise starting in 2009.

This is a neat story, but I still have to confess some surprise. Should it take 15 years after the popping of the bubble before college students graduate with the same degrees again? I guess so, and I’m interested in whether other industries have experienced a similar lag. Were people entering school in, say, 2003 acutely aware of how badly fresh graduates were suffering? Were they so aware of market conditions at the time that they decided things were too risky? Why didn’t freshmen and sophomores course-correct earlier when they saw that the bubble had burst?

Elsewhere, Matt Sherman points out this survey from Stack Overflow, which shows that three-quarters of developers have a bachelor’s degree or above, alongside much other interesting data. Alice Maz and an email correspondent remark that people decide on majors because they’re driven by fear of failure, not for high wages; that explains why so many people are fighting for med school spots. I like that Bjoern Michaelsen and commenters below have pointed out that developers suffer from significant skill depreciation and limited job security; I suppose that this is stuff that undergrads are able to intuit. And several people have remarked on mid-aughts fears that all software development would be outsourced to India; I had been unaware of the strength of this fear.

I’d like to finally remark that this could be an interesting project for more serious researchers to pick up. I wrote this for fun in my leisure time, and I invite others to study how cyclical demand for this major is, what the supply constraints are, and what the quality gradient among developers looks like. Someone more serious than me could also examine how the NCES aggregates different majors into these categories; perhaps a more granular breakdown would be more helpful. Wage data especially might be useful to overlay here. In the meantime, I invite people to keep commenting here.


The post Why do so few people major in computer science? appeared first on Dan Wang.


Rich Inner Cities

Income map, Philadelphia vs. Paris: green means rich, red means poor.

Here’s a video that explains why European inner cities have the most expensive homes, while American inner cities are full of slums.

Europe’s cities were built in the Middle Ages, before cars or railroads. Rich people paid for expensive homes in city centers so they could have the shortest walk to work. An urban home will naturally be limited in size, but prior to the Industrial Revolution, it didn’t make sense to have a massive house: It’s hard to keep a home warm and lit with candles and wood charcoal alone.

inner-city Paris

American cities were built in the 19th century, after the advent of railroads and coal power. Rich people paid for big expensive estates along railroad lines because they could, and train tickets were so expensive that poor people could not afford to commute.

Suburban home along Philadelphia Main Line
Typical commute to work

This configuration persisted for over a century, reinforced by the fact that American inner cities have much higher crime rates than European ones.

Oh but hey America’s most expensive cities are turning inside out!

These days, the ability to commute to work is no longer reserved for the elite. In fact, long commutes are for suckers. Plus, it seems that having a gigantic house is overrated. Rich people are having fewer kids, and it doesn’t make sense to maintain a huge estate for two employed adults and a stay-at-home dog.

According to Bloomberg, Americans now want to live downtown. The zoning regulations of dense urban areas have made buildable lots incredibly expensive relative to the construction cost of a new home, so urban developers focus on building high-end units to maximize their profits. If only rich people can afford to live in city centers, crime rates fall, creating a virtuous cycle that makes Seattle look more and more like Paris.
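A back-of-the-envelope sketch of that developer incentive (every number below is hypothetical; the point is only that when the lot is a large fixed cost, the high-end unit is the one that pencils out):

```python
# Hypothetical price of a buildable urban lot, paid regardless of what
# gets built on it.
lot_cost = 900_000

modest = {"build_cost": 200_000, "sale_price": 1_000_000}
luxury = {"build_cost": 500_000, "sale_price": 2_000_000}

def margin(unit):
    """Developer's profit: sale price minus construction and land."""
    return unit["sale_price"] - unit["build_cost"] - lot_cost

print(margin(modest))  # -100000: a loss once the lot is paid for
print(margin(luxury))  # 600000: only the high-end unit absorbs the land cost
```

The land cost is the same either way, so the developer's choice is dictated by which unit can carry it.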

There’s just one thing, though. The opposite seems to be true in metros like Dallas, San Antonio, Phoenix, Atlanta, and pretty much everywhere that isn’t Seattle-San Francisco-New York. But other than those non-entities, wealthy Americans are uniformly flocking downtown, I guess.


How Markets Work


By James Kwak

The Congressional Budget Office’s assessment of the Republican health care plan, as passed in the House, is out. The bottom line is that many more people will lack health coverage than under current law—23 million by 2026—even though the bill allows states to relax the essential health benefits package, which should in theory attract younger, healthier people. This is not a surprise.

I just want to comment on the role of markets in all of this, which I think is not fully understood. For example, the Times article by the very up-to-speed Margot Sanger-Katz explains that the American Health Care Act of 2017 will make markets “dysfunctional.”

Screen Shot 2017-05-25 at 2.29.31 PM

This is consistent with the rosy view that many people, particularly centrist Democrats, have of health care: if we could only get markets to behave properly (correct for market failures, to use the jargon), everything would be great.

But that’s not how markets work.

The CBO report specifically discusses the “stability of the health insurance market,” which they define in these terms: the market is unstable “if, for example, the people who wanted to buy coverage at any offered price would have average health care expenditures so high that offering the insurance would be unprofitable.” In their analysis, instability will result in some states that waive both the essential health benefits package and the prohibition on medical underwriting (charging premiums based on applicants’ health status). In that case, healthy people will choose cheaper policies with less comprehensive benefits. Only sick people will buy more generous policies, which will soon become apparent to insurers, who will raise premiums (to account for sick people’s higher expected health care costs), which will make insurance unaffordable for the people who need it most.
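The unraveling the CBO describes can be simulated in a few lines. All the numbers here are made up; the point is only the dynamic: repricing at the pool's average cost pushes out the healthiest buyers, which raises the average again.

```python
# Hypothetical expected annual medical costs for five would-be buyers.
costs = [1_000, 2_000, 5_000, 10_000, 50_000]
willingness = 1.5  # assume a person buys only if premium <= 1.5x their own expected cost

pool = costs
premiums = []
for _ in range(5):
    premium = sum(pool) / len(pool)  # insurer prices at the pool's average cost
    premiums.append(premium)
    # healthier people drop out once the premium exceeds what they'll pay
    pool = [c for c in costs if premium <= willingness * c]

print(premiums)  # [13600.0, 30000.0, 50000.0, 50000.0, 50000.0]
```

The premium more than triples even though nobody's underlying health changed; only the composition of the pool did.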

This is all true. But that’s not a dysfunctional market. That’s just a market.

It’s a common characteristic of markets that suppliers offer different products, each designed to meet the needs of a different segment of buyers. In fact, this is generally considered a positive feature of markets. Market completeness—meaning, roughly speaking, that all possible goods and services can be traded—is an assumption of some of the most important theorems in economics.

In the example above, health insurers are offering one low-frills, low-priced product and another gold-plated, high-priced product. In most contexts, we would consider this a good thing. Can you imagine if everyone had to buy the same Toyota Camry? Isn’t it good that people with low incomes can buy a Honda Fit, while those with high incomes can buy a BMW M6? And we don’t worry about the fact that most people can’t afford an M6.

Say you have treatable cancer. Your expected medical costs are $50,000 for the next year. In a functioning market, your health insurance policy should cost about $60,000. (The extra $10,000 covers administrative costs and the cost of capital.) It’s still insurance, because you’re protected against the risk that your medical costs will unexpectedly be $100,000. That’s the right product for you: it’s the one you need, and it’s priced appropriately. That’s what markets are supposed to provide. The world where you can buy an individual policy for $3,000 because the insurer doesn’t know you have cancer, or because Obamacare prohibits the insurer from using that information—you may get treatment in that world, but only because we prevent the market from functioning the way it’s supposed to.
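The arithmetic of that paragraph, spelled out (the figures come from the text above; splitting the $10,000 out as a flat loading is my simplification):

```python
expected_cost = 50_000  # expected medical costs for the year
loading = 10_000        # administrative costs + cost of capital
premium = expected_cost + loading
print(premium)  # 60000

# It's still insurance: in a surprise $100,000 year, the insurer, not the
# patient, absorbs the amount above expectation.
bad_year = 100_000
excess_covered = bad_year - expected_cost
print(excess_covered)  # 50000
```

The premium tracks expected costs; what the buyer is actually purchasing is protection against the variance around that expectation.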

The core problem, of course, is that cancer treatment isn’t like a BMW that can go 150 miles per hour; we’re not willing to call it a luxury that most people can’t afford. Most people can’t afford to pay $60,000 for a health insurance policy. A world in which sick people are priced out of health care is not a world we want to live in. And that’s why markets are the wrong way to distribute essential health care (something I discuss at much greater length in Economism).

We think all people should get decent care regardless of their income. That means we have to have a basic minimum available to everyone, and people shouldn’t have to pay more for it than they can afford. There’s a name for this system: single payer. We may need a long time to get there. But in the meantime, let’s stop pretending that markets can solve our problems, if only we could somehow make them function properly.



accelerationism and myth-making

I've been reading a good bit lately about accelerationism — the belief that to solve our social problems and reach the full potential of humanity we need to accelerate the speed of technological innovation and achievement. Accelerationism is generally associated with techno-libertarians, but there is a left accelerationism also, and you can get a decent idea of the common roots of those movements by reading this fine essay in the Guardian by Andy Beckett. Some other interesting summary accounts include this left-accelerationism manifesto and Sam Frank's anthropological account of life among the "apocalyptic libertarians." Accelerationism is mixed up with AI research and new-reactionary thought and life-extension technologies and transhumanist philosophy — basically, all the elements of the Californian ideology poured into a pressure cooker and heat-bombed for a few decades.

There's a great deal to mull over there, but one of the chief thoughts I take away from my reading is this: the influence of fiction, cinema, and music over all these developments is truly remarkable — or, to put it another way, I'm struck by the extent to which extremely smart and learned people find themselves imaginatively stimulated primarily by their encounters with popular culture. All these interrelated movements seem to be examples of trickle-up rather than trickle-down thinking: from storytellers and mythmakers to formally-credentialed intellectuals. This just gives further impetus to my effort to restock my intellectual toolbox for (especially) theological reflection.

One might take as a summary of what I'm thinking about these days a recent reflection by Warren Ellis, the author of, among many other things, my favorite comic:

Speculative fiction and new forms of art and storytelling and innovations in technology and computing are engaged in the work of mad scientists: testing future ways of living and seeing before they actually arrive. We are the early warning system for the culture. We see the future as a weatherfront, a vast mass of possibilities across the horizon, and since we’re not idiots and therefore will not claim to be able to predict exactly where lightning will strike – we take one or more of those possibilities and play them out in our work, to see what might happen. Imagining them as real things and testing them in the laboratory of our practice — informed by our careful cross-contamination by many and various fields other than our own — to see what these things do.

To work with the nature of the future, in media and in tech and in language, is to embrace being mad scientists, and we might as well get good at it.

We are the early warning system for the culture. Cultural critics, read and heed.