How Many Delegates does Sanders Need on Tuesday?

Tomorrow, Tuesday March 15, like many Tuesdays over the past couple of months, is THE DAY THAT IS GOING TO SETTLE THE PRIMARY ONCE AND FOR ALL!!1!!!!!111!

Which is to say, there are a fair number of delegates at stake, 691 out of the 4051 pledged delegates, over a quarter of the 2724 remaining pledged delegates.

[Aside: I’ll be drawing delegate counts from The Green Papers, confirming with 538’s counts and noting any discrepancies. Also, I’ll be ignoring superdelegates, because their votes are not pledged, and I fully expect them to swing to whichever candidate has more pledged delegates at the convention.]

At the moment, the pledged delegate counts are Clinton 775, Sanders 552. So, how many would Sanders need to win in order to be on track to capture 50% of the pledged delegates before the convention?

Sanders needs 1474 of the remaining 2724 delegates, or 54%. So, the simplest calculation would say that he would need to win at least 374 of tomorrow’s delegates.
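The arithmetic above can be sketched in a few lines — a back-of-the-envelope check using only the numbers quoted in this post, not a model:

```python
# Delegate math from the post: current pledged totals and what remains.
total_pledged = 4051
clinton, sanders = 775, 552
remaining = total_pledged - clinton - sanders   # 2724 still up for grabs
majority = total_pledged // 2 + 1               # 2026 needed to clinch
sanders_needs = majority - sanders              # 1474
share = sanders_needs / remaining               # ~54% of what's left
tuesday_target = round(share * 691)             # applied to Tuesday's 691

print(remaining, sanders_needs, round(share, 3), tuesday_target)
# 2724 1474 0.541 374
```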

But that calculation ignores the fact that the states vary in systematic ways. If Sanders had won 51% of the vote in Vermont, for instance, that would not be an indication that he was on track to win 51% of the pledged delegates nationally.

To address this issue, 538 put together their delegate tracker, which attempts to adjust for demographic variation among the states. Their model estimates the number of delegates each candidate would be expected to win in a given state in order to wind up with 50% total. For example, their model projects that Sanders should perform substantially better in Nebraska than in other states. Sanders won 15 delegates there to Clinton’s 10, which is exactly the split given by their model.

That is, to the extent to which their model is accurate, the results from Nebraska point to a very close primary race nationally. (Personally, I’m not sure about their model, as it seems to rely more on conventional wisdom and media narratives than on data — sort of a microcosm of the decline in quality of 538 overall. But, it is probably a decent first-order correction to a simple delegate-count horse race.)

According to the 538 model, Clinton’s target number is 365 delegates, while Sanders’s is 326. The difference comes from Florida and Illinois (and, to a lesser extent, North Carolina), where they expect Clinton to overperform relative to her national standing.

So, for example, if Sanders were to win 330 delegates, Clinton would extend her lead over him, but it would suggest that, if he keeps performing at that level, he will win more than 50% of the remaining delegates.

The problem (and the reason I wrote this) is that the number 326 does not account for the fact that Clinton already has a lead of more than 200 delegates. At this point for Sanders, winning 50% of the remaining delegates means losing the nomination.

So, I’m combining the two calculations — the demographic corrections from 538 and the current delegate totals — to come up with a number that I think represents a reasonable target for tomorrow’s primaries.

There are a couple of different ways to do this. One sets Sanders’s target at 353, and the other at 349. So, something in that vicinity, let’s call it 351. And Clinton’s corresponding target would be 340 delegates.
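The post doesn’t spell out the two methods, but one plausible reconstruction — my assumption, not the author’s stated procedure — is to scale 538’s target (326 of Tuesday’s 691) by the ratio of the share Sanders actually needs (~54% of remaining delegates) to an even 50/50 split. That lands right at one of the numbers above:

```python
# An assumed reconstruction of one of the "couple of different ways":
# scale 538's 50/50-pace target by the share Sanders actually needs.
sanders_538_target = 326     # 538's Tuesday target for a 50/50 pace
needed_share = 1474 / 2724   # share of remaining delegates he needs (~54%)
adjusted = sanders_538_target * needed_share / 0.50
print(round(adjusted))
# 353
```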

Of course, regardless of the specific outcome tomorrow, both campaigns will continue on, notwithstanding whatever predictable and idiotic statements come from the media. And once those results are in, I’ll update this calculation for next week’s primaries in Arizona, Idaho, and Utah.

What Hillary Clinton’s AIDS Comments Reveal About Her Worldview

On Friday, while attending Nancy Reagan’s funeral, Hillary Clinton gave an interview to MSNBC in which she made the following statement:

It may be hard for your viewers to remember how difficult it was for people to talk about HIV/AIDS in the 1980s. And because of both President and Mrs. Reagan, in particular, Mrs. Reagan, we started a national conversation when before no one would talk about it, no one wanted to do anything about it, and that too is something that I really appreciated, with her very effective, low-key advocacy, but it penetrated the public conscience and people began to say ‘Hey, we have to do something about this too.’

This was a very strange and dumb thing to say, even for someone who seems to make as many unforced errors as Clinton does. As social media and news outlets quickly reminded her, the truth of the matter was much closer to the opposite of what she said. The Reagan administration, including Nancy, was legendarily silent on the issue.

The Clinton campaign’s initial apology seemed almost as bad:

[Screenshot of the Clinton campaign’s initial apology]

“Misspoke” seemed like a bizarre and dismissive characterization of a full paragraph of revisionist hagiography that was clearly part of her prepared remarks for the interview, and this terse apology did not do much to stem the criticism.

However, on Saturday, Clinton posted a much longer apology that explicitly disavowed the credit she had given to the Reagans. And, importantly, she gave credit to the many, many activists who did start our national conversation about AIDS in spite of the depraved indifference of the Reagan administration.

To be clear, the Reagans did not start a national conversation about HIV and AIDS. That distinction belongs to generations of brave lesbian, gay, bisexual, and transgender people, along with straight allies, who started not just a conversation but a movement that continues to this day.

The AIDS crisis in America began as a quiet, deadly epidemic. Because of discrimination and disregard, it remained that way for far too long. When many in positions of power turned a blind eye, it was groups like ACT UP, Gay Men’s Health Crisis and others that came forward to shatter the silence — because as they reminded us again and again, Silence = Death. They organized and marched, held die-ins on the steps of city halls and vigils in the streets.

One can quibble, of course. The apology walks back her praise of the Reagans, but it ignores the gut-churning cruelty that characterized much of the administration’s response. And there’s the fact that much of the rest of her statement is basically about how she, Hillary Clinton, is the actual hero of the story. But, it was a political funeral in the middle of an election, so those omissions and that spin are not surprising. And, as far as apologies from politicians go, this one was really pretty good.

So, my anger has subsided somewhat, but I have continued to be puzzled as to how she could possibly have made this statement in the first place. The theory that makes the most sense to me, as bizarre as it is, is that this was actually Hillary Clinton’s perception of the events of the 1980s.

Garance Franke-Ruta (storified here) makes this argument:

[Screenshot of Garance Franke-Ruta’s tweets]

(And a bunch more interesting points. Well worth reading, and clicking through the links.)

There was also this article, published in the Advocate on March 6, shortly after Nancy Reagan’s death, and several days before Clinton’s comments. The article presents the Reagans’ relationship with AIDS in the most generous possible light, with several passages of this flavor:

Nancy Reagan is sometimes credited with pushing her husband to do something about AIDS, and he eventually supported some funding for research. The death of their friend, actor Rock Hudson, is often referred to as a pivotal moment.

So there is a very specific perspective from which Clinton’s original statement can be seen as, well, sort of true. It’s sort of a Great Man Theory perspective. Sure, there were lots of things happening, people saying things, protesting, and so on, but the important part of the history is what happened within the walls of power. If by “national conversation” you mean “conversation among the nation’s elite”, and if by “the public conscience” you mean “the public consciousness”, and by “the public consciousness” you mean “the consciousness of the political establishment”, maybe Nancy Reagan was a key driving force.

I suspect that this fundamentally oligarchical worldview is behind a lot of Clinton’s political missteps. When she brags about being praised by Henry Kissinger, she seems genuinely surprised that there are people who don’t find that to be a compelling reason to vote for her. And it helps to explain her response to the protests that led Donald Trump to cancel his rally in Chicago on Friday:

[Screenshot of Clinton’s statement on the Trump rally protests]

Her message seems to criticize the protesters as much as Trump’s rhetoric — a profoundly authoritarian stance that seems natural if you assume that politics should be a conversation among a very limited set of elites, and that all the little people just need to be more polite and deferential.

Fundamentally, to me, Hillary Clinton acts less like someone running for President, and more like someone applying for a job as Head Animal Control Officer. She has mustered the support of the town council, and she has a letter of recommendation from the Chief of Police. But she can’t for the life of her understand why all the dogs in the pound keep interrupting her, acting as if they should have a say in the decision.

Don’t get me wrong. I suspect that she would be a relatively benevolent dog catcher. Compare Donald Trump, who is campaigning on a promise that he will euthanize all the dogs and turn them into a plentiful supply of cat food.

Her second apology for her bizarre statements about Nancy Reagan was a huge improvement. It was as if, when sufficient pressure was placed on her, the hundreds of millions of people who are not part of the political, financial, and media elite came momentarily into focus for her. It was disappointing, however, that her ability to acknowledge the courage and importance of regular Americans did not even persist to the end of the statement.

Balter Provides Some Background on Why Science Magazine Fired Him

Yesterday we learned that Michael Balter had been fired by Science magazine, and that it had something to do with his article last month on sexual harassment in academia. Today, he has published his promised blog post in which he has provided some additional background.

Based on the additional details he provides, it sounds like it was a combination of a couple of things.

First, a historical pattern of not being sufficiently deferential to the higher-ups. Particularly troubling was this tidbit:

I’ve already talked above about the culture at AAAS that allowed four colleagues to be fired precipitously in 2014, and will not elaborate on that here–except to say that just as I was beginning the Brian Richmond investigation, one of my editors asked me to delete a key blog post about that episode in which I criticized our Editor-in-Chief Marcia McNutt for parroting the party line put out by former AAAS CEO Alan Leshner. I declined to engage in this sanitizing of the historical record, not least because I consider that episode to be one of the proudest moments of my life. It’s not often that one gets to put one’s career on the line for something one believes in, and I have no regrets.

Second, it sounds like the editors, or at least some of them, were never fully on board:

But it is important to note that Science did not jump on the story when we first found out about the allegations concerning Richmond last August. There was discussion about whether we should focus on this one person, about whether Richmond and his alleged actions were important enough to write a story about, and related issues. I don’t think my editors will contest the fact that I pushed the hardest for us to do a story; but even after the Geoff Marcy sexual harassment case broke at Berkeley, and the astronomer was forced to resign, there was still a great deal of ambivalence about whether the Richmond case was newsworthy.

Balter seems to suggest that Science’s reluctance was motivated primarily by an excess of caution (fear of lawsuits), and I’m sure that was part of the story.

But it is also important to keep in mind that Science is one of the most prominent mouthpieces of the scientific establishment. That’s one of the things that made the original article so powerful and important.

That’s not to say that the scientific establishment is pro-sexual harassment per se. But, the fact is that power, including sexual power, over young people has long been one of the implicit perks of success in academia. Some people exploit that power, and some don’t, but giving away power is rarely a high priority.

I’m not arguing for a conspiracy here. It’s just that the people closely associated with a publication like Science, whether as editors, or publishers, or authors, or journalists, are people who have risen to the top in the current system — often with good cause. But it is natural for them to be wary of things that challenge the status quo.

Natural, just not admirable.

As Balter notes, it will be interesting to hear what, if anything, Science says publicly about this. In the meantime, the good news is that there’s an excellent science journalist out there with some time on his hands. You should hire him.

Update: AAAS has issued this statement:

Michael Balter was provided notice on March 10, 2016 that his contract as a freelance writer for Science magazine was being discontinued. Mr. Balter has written many stories for Science‘s news section, including one published February 9, 2016 on a sexual misconduct case.

Science editors stand by the February 9, 2016 story as published. The goal of editing was to ensure that the story was both powerful and fair.

AAAS remains committed to providing leadership on stopping sexual harassment in science and empowering women in STEM fields.

Which, you know, okay.

Science Magazine Fires Michael Balter, Who Wrote That Sexual Misconduct Article

About a month ago, Science Magazine published an excellent long article on a sexual misconduct case involving Brian Richmond, the Curator of Human Origins at the American Museum of Natural History. The article framed the case in the context of the recent rash of high-profile misconduct cases at top universities and the culture of harassment throughout academia.

[Aside: If you’re an academic who is struggling to figure out when your behavior does or does not constitute harassment, I wrote this handy guide for you.]

It’s an infuriating issue, in part because it is typically so difficult to convince people in positions of authority to take it seriously. So, this very serious treatment in one of the flagship science journals seemed like a promising development, maybe even an indication that we — the academic community — were turning a corner of sorts.

Then, today, Michael Balter, the author of that article, announced on twitter that Science had fired him.

[Screenshot of Balter’s tweet announcing his firing]

Balter has promised a blog post to explain the details, but it sounds like yes, it was related to that article. Specifically, he says that his firing stemmed from conflicts in the run-up to the publication of the article, where he pushed back hard against the editors in order to not “water down” the article.

Update: Balter’s blog post is now available, as is my follow up.

[Screenshot of Balter’s tweet]

Hard to imagine what Science was thinking here. The only two scenarios I can make work in my head are 1) he really pissed the editors off, and was basically fired for insubordination, or 2) Science has been getting flak from somewhere, and had to appease someone. Presumably someone who does not

This little tidbit makes number 2 seem more likely:

[Screenshot]

Very much looking forward to that explanatory blog post — as well as whatever Science has to say for themselves.

What if the GOP Used Proportional Allocation of Delegates?

In the Republican presidential primary system, different states apportion their delegates among the candidates in a variety of ways. In the upcoming contests in Florida and Ohio, all of the state’s delegates are pledged to the candidate who receives the most votes state-wide. In Nevada, delegates are allocated proportional to the vote total (you get one of the 30 delegates for each 3.33% of the vote you get).

But a lot of the states are much more complicated. In South Carolina, three delegates are assigned to the leading vote-getter in each of the seven congressional districts, and the remaining 29 go to the winner statewide. Trump won all 50 this year by leading statewide and in every congressional district.

A number of the states impose a minimum threshold. For example, Massachusetts and Kentucky do proportional allocation of their 42 and 46 delegates, respectively, among all candidates receiving at least 5% of the state-wide vote.

Several states add a winner-take-all threshold. Vermont does proportional allocation of its delegates among candidates who receive at least 20% of the state-wide vote. But if any candidate gets more than 50%, they receive all 16.

In general, the effect of these rules is to push delegates towards the winning candidates — and drive nonviable candidates out of the race. This happens gently at first, as many of the early contests are more proportional. Then, as the season progresses, things take on more of a winner-take-all flavor.
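The three flavors of rule described above — straight proportional, a viability threshold, and a winner-take-all trigger — can be sketched in one small function. This is a simplification (real state rules add district-level allocations and their own rounding quirks), and the example vote shares are illustrative, not actual results:

```python
# Sketch of GOP delegate allocation rules (simplified): proportional
# allocation with an optional viability threshold and an optional
# winner-take-all trigger, using largest-remainder rounding.
def allocate(votes, delegates, threshold=0.0, wta_trigger=None):
    """votes: {candidate: statewide vote share}. Returns {candidate: delegates}."""
    if wta_trigger is not None:
        leader = max(votes, key=votes.get)
        if votes[leader] > wta_trigger:   # one candidate takes everything
            return {leader: delegates}
    viable = {c: v for c, v in votes.items() if v >= threshold}
    total = sum(viable.values())
    quotas = {c: delegates * v / total for c, v in viable.items()}
    alloc = {c: int(q) for c, q in quotas.items()}
    # hand out leftover delegates by largest fractional remainder
    for c in sorted(viable, key=lambda c: quotas[c] - alloc[c],
                    reverse=True)[: delegates - sum(alloc.values())]:
        alloc[c] += 1
    return alloc

# Vermont-style rules: 20% threshold, all 16 delegates to anyone over 50%.
print(allocate({"Trump": 0.33, "Kasich": 0.30, "Rubio": 0.19}, 16,
               threshold=0.20, wta_trigger=0.50))
# {'Trump': 8, 'Kasich': 8}
```

Note how the threshold alone reshapes the outcome: a candidate at 19% gets nothing, and their share is redistributed to the viable candidates.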

Here’s what the allocation bias looks like in the Republican primaries and caucuses that have taken place through March 8 (data from The Green Papers):


The brownish-orangish line is what you expect from strictly proportional allocation of delegates, and the points plotted indicate allocations from individual state-wide (and Puerto Rico-wide) contests. Red is Trump, Blue is Cruz, Purple is Rubio, and Black is Kasich.

One thing you can see from the plot is that Trump seems to be the greatest beneficiary of the current allocation system. This makes sense, of course, since he’s the frontrunner, and the system is basically designed to drive a consensus around the frontrunner. And, if the frontrunner were anyone other than Donald Trump, the Republican leadership would probably be very pleased with how it was working.

As a little thought experiment, here’s what the current delegate score would look like if all of the Republican primaries and caucuses used proportional allocation without a viability threshold (besides whatever minimum percentage is required to get one delegate):

[Screenshot of the delegate totals under proportional allocation]

That’s not to say that things would have worked out this way under that allocation scheme, since a different scheme would have led to different reporting, different campaign strategies, and so on. But, it’s a nice simple way to quantify the effect of structural properties of the primary system on the outcome.

In that spirit, what this tells us is that about a fifth of Trump’s delegate total — and about half his lead over Cruz — can be chalked up to Republican delegate allocation math.

We could ask the same question for the Democrats, but it is not nearly as interesting. All of the Democratic contests follow the same formula: proportional allocation of delegates among candidates exceeding 15% of the vote. About a third of the delegates come from applying this formula to the state-wide vote, and about two thirds from applying it individually to each congressional district.

That system also punishes low-performing candidates, but it does not reward high-performing ones in the same way. There are no winner-take-all states or triggers (unless you win more than 85% of the vote, guaranteeing that no one else reaches 15%).
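The Democratic formula can be sketched the same way — proportional allocation among candidates over 15%, applied separately to the statewide vote and to each district. This is a simplification (the DNC’s actual rounding rules differ in detail), but it shows how an 86% win behaves like winner-take-all even with no trigger, because the opponent falls under the threshold:

```python
# Sketch of the Democratic rule: proportional allocation among candidates
# exceeding 15% of the vote (simplified largest-remainder rounding; the
# real DNC rounding differs in details).
def dem_allocate(votes, delegates, threshold=0.15):
    viable = {c: v for c, v in votes.items() if v >= threshold}
    total = sum(viable.values())
    quotas = {c: delegates * v / total for c, v in viable.items()}
    alloc = {c: int(q) for c, q in quotas.items()}
    for c in sorted(viable, key=lambda c: quotas[c] - alloc[c],
                    reverse=True)[: delegates - sum(alloc.values())]:
        alloc[c] += 1
    return alloc

# Vermont-like: an 86% win leaves the opponent under 15%, so the
# proportional rule quietly becomes winner-take-all.
print(dem_allocate({"Sanders": 0.86, "Clinton": 0.14}, 16))
# {'Sanders': 16}
```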

So, the Democratic system leans a bit more towards proportional overall. But much more important is the fact that there are only two competitive candidates, both of whom are rarely in danger of failing to meet that 15% threshold.


Red points represent Clinton, and Blue represent Sanders. The only real outliers are Vermont, where Sanders got 86% of the vote, and Mississippi, where Clinton topped 85% in two of the state’s four congressional districts.

I’m not advocating for any particular delegate allocation scheme here. We know there’s no perfect voting system. I just hope to contribute in my own small way to the enormous pile of regrets plaguing Republican party leaders as Trump sits atop his throne of skulls forcing them to fight to the death.

Looks Like PLOS ONE Screwed Up the “Creator” Retraction, Too

Okay, that “Creator” paper has officially been retracted by PLOS ONE (previously, and here). Based on what we now know, that looks like the wrong decision — at once unfair to the authors and completely failing to address the actual issue.

When PLOS ONE first announced its intention to retract the article, they stated that “the peer review process did not adequately evaluate several aspects of the work”, which makes it sound like they found problems other than inclusion of the “Creator” language that meant it should not have been published. Now that the formal retraction has happened, here’s the official statement:

Upon receiving these concerns, the PLOS ONE editors have carried out an evaluation of the manuscript and the pre-publication process, and they sought further advice on the work from experts in the editorial board. This evaluation confirmed concerns with the scientific rationale, presentation and language, which were not adequately addressed during peer review.

Consequently, the PLOS ONE editors consider that the work cannot be relied upon and retract this publication.

The editors apologize to readers for the inappropriate language in the article and the errors during the evaluation process.

This is infuriatingly vague, but it makes it sound as if the primary issue was the “Creator” language. The authors have insisted that this was a translation problem. In the context of the rest of the paper, that seems entirely plausible to me. In support of this explanation, check out this comment from over at Complex Roots (spelling corrected):

I am so surprised that so many people assert that there is no way a translation error though they don’t speak any Chinese.

In fact, there is special phrase in Chinese, which is “zao wu zhe”. If we translate it literally and directly into English, it is “the one who creates” or ‘creator’. Ancient Chinese people use it a lot in poems, way long before Christian is introduced in China. The meaning is same as “nature” because they believe that nature ‘creates’ everything, not a special man, or a God. There is a sentence in a poem written in Song Dynasty (more than 1000 years ago) by Su Shi, which saying that ‘we can enjoy the breeze of the river, the moon between the mountain; this is the inexhaustible treasure that the creator have, and all of us can appreciate them together’. So here ‘creator’ means nature. (poem link:

Or you can use google translator to check this page (a Chinese dictionary):
It will tell you that ‘zao wu zhe’, which means who created all things. It refers to nature.

However, in English, Creator is epithet of God because people firstly say it believe God creates everything. That’s the difference. The author used capitalized ‘Creator’ because he thought that the underlying meaning of this idiom in Chinese and English is same.

Unless there were technical issues with the science, the authors should have been given the opportunity to edit the paper to correct the offending language.

As I argued previously, the fact that this error slipped through is troubling, not because it plays into some creationist agenda, but because it reveals a review and editorial process that involved absolutely no care or effort.

Now, it seems that PLOS has responded to the twitter/comment outrage by throwing the authors under the bus, while giving no reason to believe that any other manuscripts, present or future, are going to receive any more care and attention than this one did.

“Creator” Paper Retracted at PLOS One

Well, true to their word, the editorial staff at PLOS ONE acted quickly to review that paper from January that interpreted their study of biomechanical characteristics of hand coordination as evidence of “proper design by the Creator”. (Look here for background.) They issued this statement today:

The PLOS ONE editors have followed up on the concerns raised about this publication. We have completed an evaluation of the history of the submission and received advice from two experts in our editorial board. Our internal review and the advice we have received have confirmed the concerns about the article and revealed that the peer review process did not adequately evaluate several aspects of the work.

In light of the concerns identified, the PLOS ONE editors have decided to retract the article, the retraction is being processed and will be posted as soon as possible. We apologize for the errors and oversight leading to the publication of this paper.

The paper’s first author, Ming-Jin Liu, has posted multiple comments asserting that there was no creationist agenda, and that this was simply an issue of non-native English speakers misunderstanding the implications of using “the Creator” when they had meant “natural selection”.

Personally, I’m inclined to believe this explanation, and if this were the only problem with the paper, I would let them make a correction. If, in each of the three places where the Creator is credited, the authors were to cite their findings as “evidence of exquisite adaptation” or some such thing, the meaning would be largely unchanged, and no eyebrows would be raised.

Here’s the thing, though: at this point, I have no confidence that there is not something else dreadfully wrong with the paper. Including three references to “the Creator” — one in the abstract — raises such an obvious red flag that even a cursory read should have caught it. The capital C makes the word jump out if you even scan the abstract.

I think I would feel the same way if the paper were littered with errors involving there, their, and they’re: it’s a mistake a non-native speaker could make, and it would not make the science wrong. But the only way those errors make it all the way through to publication is if multiple people fail to do their jobs.

So what this says to me is that none of the people involved in the editorial and review process put in even a modest effort. I don’t know if there are major, even fatal, technical flaws with the paper. However, I am confident that if there are major flaws, the careless review process applied to this paper would never have identified them.

The question, then, is how much of an outlier this was. Can we trust that the rest of the articles at PLOS ONE are actually going through a legitimate review process (as imperfect as that is under the best of circumstances)? Or should we assume it has slid into the predatory open-access model of publishing?

In short, I don’t really care whether or not this particular paper is retracted. I do care whether or not PLOS can do something to shore up its review process.

Or is this another piece of evidence in favor of post-publication peer review? It is certainly true that an advantage of that model is that it avoids creating a false sense of authority.

One bright side of the controversy is that it provides an excuse to revisit this piece of awesomeness from the New York Dolls:

“Mystery of the Creator’s Invention” at PLOS One


PLOS One published a paper in January with the title “Biomechanical Characteristics of Hand Coordination in Grasping Activities of Daily Living”. And the abstract contains this line:

The explicit functional link indicates that the biomechanical characteristic of tendinous connective architecture between muscles and articulations is the proper design by the Creator to perform a multitude of daily tasks in a comfortable way.

The main text contains two more references to “the Creator”. The Introduction notes that

Hand coordination should indicate the mystery of the Creator’s invention.

And the end of the Discussion:

In conclusion, our study can improve the understanding of the human hand and confirm that the mechanical architecture is the proper design by the Creator for dexterous performance of numerous functions following the evolutionary remodeling of the ancestral hand for millions of years.

Today, someone seems to have noticed this “Creator” stuff and brought it to the attention of the journal, which issued a statement that they’re looking into it.

So what happened?

This does not read to me like it is part of some sort of conspiracy to infiltrate the biology literature with intelligent design propaganda. However, it is a good illustration of the issue with the PLOS One model as it is implemented in practice.

[Screenshot]

PLOS One is based on a brilliant idea. The papers are peer-reviewed, but evaluation is explicitly supposed to focus on technical accuracy, ignoring “impact”. Limited print space and a pathological pursuit of citation metrics have long meant that lots of good science has a hard time getting published, either because it is not flashy enough, or because journals are reluctant to publish things that do not fit squarely in the domain of what they imagine their readers’ interests to be.

In a sense, PLOS One aims to split the difference between traditional publishing and the preprint / post-pub-peer-review model. It is someplace where you can publish interdisciplinary work, weird little fun studies, negative results, etc. But in principle you get the benefit of knowing that the work itself has been vetted.

Or at least as vetted as you ever get with peer review, which is to say, imperfectly and highly variably.

As a result, PLOS One publishes lots of cool stuff, and it provides a valuable service to the community. But when something like this happens, it makes it seem like the editorial policy in practice is something more along the lines of “just make sure the check clears”.

How Should Democrats Allocate Primary Delegates Among States?

The US Presidential election system is weird, but the primary system is really weird. Contests are strung out over the course of five months, with the rules varying from state to state — from who can vote to how the voting happens to how the delegates are allocated among the candidates.

For the Democratic Party, there are 4051 “pledged” delegates, whose first-ballot votes at the convention are determined by the results of state-wide (or state-equivalent-wide) primaries, and 714 “superdelegates”, party leaders and insiders of various sorts who can vote for whomever they want.

Allocation of Delegates

The exact number of delegates assigned to a given jurisdiction is determined in a number of steps (details here). First, each jurisdiction is assigned a base number of delegate votes. For the 50 states and DC, this base is 3200 times an “allocation factor” that is the average of two quantities. Half of the allocation factor is set by the fraction of electoral college votes for the jurisdiction (e.g., 3/538 for DC and 29/538 for Florida). The other half is the fraction of the nationwide popular vote for the Democratic presidential candidate that came from that state over the previous three elections.

That second factor does two things. First, it makes the delegate allocation more proportional to population — reducing the advantage given by the electoral college system to smaller states. Second, it rewards states that tend to vote Democratic.


In this figure, blue dots indicate states that have gone for the Democratic candidate in each of the three previous elections, while red dots indicate states that have gone for the Republican candidate in all three. Gray dots are states that have voted for the Democrat in one or two of the three most recent elections. You can see that the slope of the red points is lower than that of the blue points.

You can also see how this scheme down-weights the electoral value of the small states, since the apparent y-intercept is below zero. We can see this shift more clearly if we replot this in terms of the number of delegates per electoral vote:


Note that the apparent purple point is actually an overlay of red and blue (Montana and Vermont).

Then, each state is given various bonuses based on when they hold their primary. For example, you get a larger bonus for holding your primary later in the season. And, for primaries held March 22 or later, you get a 15% bonus if you are part of a cluster of three or more neighboring states with primaries on the same day.

(Because in Democratic primaries, as in the Special Olympics, everyone is a winner, no state has an overall bonus of less than 15%.)

Delegates are also assigned to various jurisdictions that don’t actually get to vote in the presidential election, like Puerto Rico, and to “Democrats Abroad”, people living overseas, who would vote in the presidential election via absentee ballot in their home state.

How SHOULD delegates be allocated?

So, is this a sensible way to allocate delegates? It depends on the goal. The current system seems to be basically a hybrid of the electoral college system and a popular vote, with some additional features to reward party loyalty. It seems that the system aims to strike a balance among three competing goals:

  1. Pragmatic considerations of electoral math. Elections are determined by the electoral college, so a system that mirrors the electoral college seems more likely to produce a candidate who can win.
  2. A democratizing impulse. At the same time, there is a sense that the electoral college system is weird and not always fair. Skewing delegate weights towards population size, allowing proportional allocation of delegates within states, and allowing participation by groups that are normally excluded from the Presidential election all make the primary outcome a bit more like a nationwide popular vote.
  3. Community Building. If you reward states that produce Democrats and get them to vote, as well as states that get in line and follow the rules, you presumably hope that this will lead to more of both.

However, if we take the pragmatic angle seriously, there is something missing from this calculation. In the current political environment, the outcome of a Presidential election depends primarily on how candidates perform in the swing states.

Barring a landslide, come November, the Democrats are going to win Washington DC and Massachusetts, and the Republicans are going to win Wyoming and Mississippi. If the goal is to nominate a candidate who can prevent a dystopian Trump presidency, Clinton’s win in Alabama and Sanders’s win in Vermont are irrelevant. The primary results we should pay attention to are those from states that could determine the election outcome. Far more important are the close results in Iowa and Nevada, Sanders’s victory in Colorado, and Clinton’s in Virginia.

So, the other thing you could include in your allocation factor is a measure of “swinginess”. There are a lot of ways you could do this, but here’s a simple one: calculate the mean and standard deviation of the difference between Republican and Democratic vote percentages in each state over the past three elections. Assuming those values are Normally distributed, calculate the probability that the winner is different from what we expect from the mean. So, if the mean difference is exactly 0%, the probability would be 0.5. If the mean difference is 5%, and the standard deviation is also 5%, the probability would be about 0.16. If the mean difference is 30%, and the standard deviation is 5%, the probability is effectively zero.

Take this probability and multiply it by the number of electoral votes. The result is something like the number of electoral votes you can expect to get by doing well in that state.
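Here is a minimal sketch of that calculation, assuming (as above) that the vote margins are Normally distributed. The probability that a state’s winner differs from the historical expectation is just the Normal tail beyond |mean| / sd.

```python
from math import erf, sqrt

def upset_probability(mean_margin, sd):
    """P(winner differs from the sign of the mean margin),
    assuming margins are Normal(mean_margin, sd)."""
    if mean_margin == 0:
        return 0.5
    z = abs(mean_margin) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))  # Normal tail probability

def expected_swing_votes(mean_margin, sd, electoral_votes):
    """Electoral votes weighted by how often the state is in play."""
    return upset_probability(mean_margin, sd) * electoral_votes

print(round(upset_probability(0, 5), 3))   # 0.5
print(round(upset_probability(5, 5), 3))   # 0.159
print(round(upset_probability(30, 5), 3))  # effectively zero
```

A state like Ohio, with a small mean margin and 18 electoral votes, would score close to 9 expected swing votes; a deep-red or deep-blue state of the same size would score near zero.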

What remains, then, is how to combine this factor with the other considerations. For example, if we use an Allocation Value that is 90% of the existing formula and 10% Swinginess, we get the following:
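One way to do that blend, sketched under the assumption that both terms are normalized to sum to 1 before mixing (the text doesn’t specify the normalization, and the state names and numbers below are made up for illustration):

```python
def blend_allocation(factors, swing_ev, weight=0.1):
    """90/10 blend of existing allocation factors with swinginess.

    factors: state -> existing allocation factor
    swing_ev: state -> probability-weighted electoral votes
    """
    total_f = sum(factors.values())
    total_s = sum(swing_ev.values())
    return {
        state: (1 - weight) * factors[state] / total_f
               + weight * swing_ev[state] / total_s
        for state in factors
    }

# Hypothetical three-state example: the competitive state gains share.
blended = blend_allocation(
    {"Safe Blue": 0.06, "Swingy": 0.04, "Safe Red": 0.02},
    {"Safe Blue": 0.1, "Swingy": 9.0, "Safe Red": 0.1},
)
```

In this toy example, “Swingy” ends up with about 40% of the blended allocation instead of the 33% it would get from the existing formula alone.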


If we use these numbers to replot our graph of base delegates per electoral vote versus electoral votes, you can see how the states that have been close in the past get bumped up.


Those four highest points, from left to right, are Colorado, Virginia, Ohio, and Florida.

Is this fair? I’m not actually sure what that means in this context. Keep in mind that nearly 1/3 of the delegates exist either a) to directly express the will of the party elite, or b) to allow the national party to manipulate how and when the states hold their primaries.

A better question might be, would it work? Also, what other consequences might result? It seems intuitive that, for the Presidential election itself, a candidate’s ability to carry Ohio is more important than how much of a landslide they could rack up in California. On the other hand, winning by more (or losing by less) in non-competitive states could make a difference in down-ticket races. And if you discount the voters in solid-blue states too much, you risk alienating your base.

All in all, I suspect something along these lines would be an improvement. At a minimum, it might be a useful way to factor in “electability”, particularly in election years that are more likely to be decided by base turnout than by swaying independent voters.

Don’t Forget Ben Carson, Who is Also Wrong About the Supreme Court

In the wake of Supreme Court Justice Antonin Scalia’s death, Republicans have been climbing all over each other like a less well-intentioned pile of zombies in an effort to most loudly claim that President Obama has no right to appoint his successor. Most of the arguments have focused on the fact that we have now entered the final year of Obama’s presidency. As you will recall, back in 2012, the ballots for president clearly stated that the results would only be construed as representing the will of the people for the next three years.

Obviously, these arguments fail any non-disingenuous reading of the constitutional and historical evidence (and contradict arguments previously made by many of those same Republicans), but, you know, the constitution, like the bible, is sacred, infallible, and beyond scrutiny — except when it turns out to be politically inconvenient.

But at the most recent Republican debate, everyone’s favorite cingulocidal maniac Ben Carson offered a different argument:

Well, the current constitution actually doesn’t address that particular situation, but the fact of the matter is the Supreme Court, obviously, is a very important part of our governmental system. And, when our constitution was put in place, the average age of death was under 50, and therefore the whole concept of lifetime appointments for Supreme Court judges, and federal judges was not considered to be a big deal.

Carson is correct that the “average age of death” used to be under 50. In fact, it did not exceed 50 until sometime in the early 20th century. However, as anyone with any educational background in public health or medicine might be expected to know, the dramatic gains in life expectancy have come mostly from reductions in early-life mortality, due to things like sewers, vaccines, and antibiotics. So, unless the Founding Fathers were appointing toddlers to the Supreme Court (spoiler: they weren’t), life expectancy at birth is pretty irrelevant. Here are a couple of graphs (generated here):


The gray line is life expectancy at birth from 1850 to 2000. The orange and red lines are life expectancy from age 60 for women and men, respectively. Since people are not typically appointed to the Supreme Court until they are in their 50s, this is actually the relevant data.

So, it is true that someone appointed to the Supreme Court today might be expected to live longer on average than someone appointed in the 19th century, but only by about ten years. But does that mean that justices given lifetime appointments to the Supreme Court serve longer than they used to? Not so much. Here are a couple more graphs, constructed from this data:


In the top panel, each diagonal line indicates the term of a single Supreme Court Justice, running from the date and age of appointment to the date and age of death or retirement. The black lines are justices who died in office, red lines are justices who resigned or retired, and blue lines are the eight justices currently serving.

In the bottom panel, each dot represents a single justice. “Mid-Term Year” is the halfway point of their tenure (middle of the line in the top graph), and duration is how long they served (length of the line in the top graph). The line is a ten-point moving average. Current justices are not included.
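The bottom panel boils down to a simple transformation of the raw terms. A sketch, assuming the data comes as (appointment year, exit year) pairs (the linked dataset may be shaped differently):

```python
def tenure_points(terms):
    """(mid-term year, duration) for each justice's term."""
    return [((start + end) / 2, end - start) for start, end in terms]

def moving_average(values, window=10):
    """Simple trailing moving average, as used for the trend line."""
    out = []
    for i in range(window - 1, len(values)):
        chunk = values[i - window + 1 : i + 1]
        out.append(sum(chunk) / window)
    return out

# Hypothetical example: a justice serving 1790-1810 plots at
# mid-term year 1800 with a 20-year duration.
print(tenure_points([(1790, 1810), (1800, 1830)]))
```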

Notice that justices were not often dying by age 50, even in the early days. There are a couple of interesting trends, though.

First, there’s a transition as we get into the 20th century, when it becomes much more common for justices to retire, rather than die in office. So, while the upper limit on the age we might expect a justice to live to might have increased by about ten years, the upper limit on the age at which they leave the court has not changed substantially in 200 years.

Second, after an initial shake-out (during which many of the justices did not have any sort of legal credentials), the long-term trend from 1820 to 1950 is towards shorter average term lengths (declining from around 20 to around 15 years). Starting with the second half of the 20th century, the trend has been towards longer tenures, with a recent average closer to 25 years. However, if you look at the scatter plot, you can see that this increase is mostly due to the absence of any short-term justices since 1970.

So, it is true that we should probably expect that the next person appointed to the Supreme Court will be there for the next twenty to thirty years, but terms of that length have been around since the beginning.
