Category Archives: science

More on Nowak et al at the Chronicle

So, an article has just come out this morning in the Chronicle of Higher Education covering the controversy over the Nowak et al Nature paper attacking kin selection. I’ve written about the paper twice previously, once here, providing an xtranormal video dramatization of the issues, and once here, trying to provide some context to explain why so many people had gotten up in arms about this particular paper (as opposed to the hundreds of scientific papers published every year that are equally wrong).

Unfortunately, the article is behind the Chronicle’s paywall, so you may not be able to read it. (I don’t know if they permit the same sorts of work-arounds that the New York Times does.)

The thing that most strikes me in the article comes at the end:

Right now Mr. Nowak is working to understand the mathematics of cancer; previously, he has outlined the mathematics of viruses. It falls within his career mission to “provide a mathematical description where there is none,” he says, a goal at once modest and lofty. He would also like to write a book on the intersection of religion and science, a publication that would no doubt further endear him to atheists.

He knows that the debate on kin selection is far from over, though he sees the ad hominem attacks as a good sign. “If the argument is now on this level,” he says, “I have won.”

along with this comment from Smayersu:

Science is written in the language of mathematics. Why is it that the biologists cry “foul” when the mathematicians and physicists investigate the theory of evolution? The biological community should welcome the help of those who are trained to examine problems from a rigorous mathematical perspective.

Two things.

First, the criticism of Nowak had nothing to do with his providing a mathematical framework. In fact, most of the people who have criticized Nowak are, themselves, mathematical biologists. The issue is that the paper discounts and misrepresents a huge body of mathematical work. In fact, while Nowak has written a number of interesting and original papers, he has also written a number of papers in which he claims to “provide a mathematical description where there is none,” the problem being that in many cases, there actually is a mathematical description. Often quite an old one.

It is as if I were to write a paper that said, “You know who was wrong? Albert Einstein! Because, look, Special Relativity does not work when you incorporate gravity. So I’ve created a new thing that I call ‘Generalized Relativity.’”

Second, it is absolutely true that ad hominem attacks do not constitute legitimate scientific criticism. However, the fact that some of the attacks on Nowak have been ad hominem certainly does not constitute evidence that he is right.

To my mind, the relevance of the ad hominem attacks is this. They reflect a deep sense of frustration on the part of the field towards Nowak and his career success. Nowak has repeatedly violated one of the basic principles of academic scholarship: that you give appropriate credit to previous work. And yet, the academic system has consistently rewarded him over other researchers who put more effort into making sure that they are doing original work and into making sure to credit their colleagues.

It is as if, after publishing my paper on Generalized Relativity, I were to be awarded tens of millions of dollars in grant money and a chair at Harvard, while the legions of physicists pointing to Einstein’s own later work were ignored. I’m guessing that I might find myself the subject of some ad hominem attacks, but it would not mean that I was right.

As a colleague of mine commented this morning, “ah, Nowak thinks he’s won because of the ad hominem attacks. by that standard, Donald Trump must be a serious presidential candidate.”

Nowak, M., Tarnita, C., & Wilson, E. (2010). The evolution of eusociality. Nature, 466(7310), 1057-1062. DOI: 10.1038/nature09205

Study sheds light on coming robot apocalypse

So, in many of the standard narratives, the robot apocalypse is triggered when the machines figure out that humans are fundamentally flawed, or because their self-awareness produces an instinct for self-defense.

Well, a new paper just out in Biological Psychiatry describes an experiment in which researchers successfully teach a computer to reproduce aspects of schizophrenia. This raises the possibility of an alternative scenario: the machines just go crazy and start killing people, Loughner-style.

After suffering from paranoid delusions, Skynet sends Vernon Presley back in time to kill his own grandfather, or something.

Actually, the paper reports a study in which a computational (neural network) model is used to examine eight different putative mechanistic causes of a particular set of symptoms often seen in schizophrenia: narrative breakdown, including the confusion of autobiographical and non-autobiographical stories. This models one putative source of self-referential delusions.

The basic setup is that the researchers use an established system of neural networks called DISCERN. The system is trained on a set of 28 stories. Once the system is trained, you can feed it the first part of any one of the stories, and it will regurgitate the rest of the appropriate story.

Half of the stories are autobiographical, everyday stuff like going to the store. The other half are crime stories, featuring police, mafia, etc.

The experiment is to mess with the DISCERN network in one of eight different ways. Each of the eight types of perturbation is meant to instantiate a neural mechanism that has been proposed to cause delusions in schizophrenia. Then, the researchers feed the computer the first line of a story and look at the magnitude and nature of the errors in the output.

Models were evaluated by their ability to reproduce errors seen in an experimental group of subjects diagnosed with schizophrenia. Basically, they are interested in finding perturbations that mix up different stories, so that the “I” of the autobiographical stories becomes associated with the gangsters and police in the non-autobiographical crime stories.
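Just to make that evaluation idea concrete, here is a minimal sketch, in Python, of how one might score that kind of story mix-up. This is purely illustrative and is not the authors’ actual pipeline; the function name, word list, and scoring rule are my own stand-ins.

```python
# Hypothetical scoring sketch -- not the authors' code. It crudely counts how
# often a recalled sentence containing the autobiographical "I" also pulls in
# crime-story vocabulary, standing in for the story mix-ups described above.

CRIME_WORDS = {"police", "mafia", "gangster", "detective", "arrested"}  # illustrative only

def agent_confusion_rate(recalled_sentences):
    """Fraction of first-person sentences that also contain crime-story words."""
    first_person = confused = 0
    for sentence in recalled_sentences:
        words = set(sentence.lower().split())
        if "i" in words:
            first_person += 1
            if words & CRIME_WORDS:
                confused += 1
    return confused / first_person if first_person else 0.0

# A healthy recall keeps the stories separate; a "delusional" one does not.
print(agent_confusion_rate(["i went to the store", "i bought some milk"]))            # 0.0
print(agent_confusion_rate(["i went to the store", "i told the mafia where to go"]))  # 0.5
```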

Two of the eight perturbations performed significantly better than the others:

  1. Working memory disconnection: Connections within the neural network that fell below a certain threshold strength were discarded.
  2. Hyperlearning: During the backpropagation part of the neural network training, the learning algorithm overreacts to prediction errors. After DISCERN was trained normally, it was hyper-trained for an additional 500 cycles. (A toy sketch of both perturbations appears just below.)
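
To make the two perturbations concrete, here is a toy sketch in Python. This is not DISCERN itself, which is a multi-module story-processing network; the weight matrix, threshold, learning rate, and gain are arbitrary placeholders, just to show the shape of each manipulation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(8, 8))   # toy stand-in for a network's connection weights

# 1. Working-memory disconnection: discard connections below a strength threshold.
def disconnect(weights, threshold=0.3):
    pruned = weights.copy()
    pruned[np.abs(pruned) < threshold] = 0.0
    return pruned

# 2. Hyperlearning: continue training past convergence with an exaggerated
#    response to prediction error (modeled here as an inflated learning rate).
def hyperlearn(weights, inputs, targets, cycles=500, lr=0.001, gain=5.0):
    W = weights.copy()
    for _ in range(cycles):
        error = targets - inputs @ W        # prediction error on each cycle
        W += lr * gain * inputs.T @ error   # over-corrects relative to normal learning
    return W

inputs = rng.normal(size=(20, 8))
targets = inputs @ rng.normal(size=(8, 8))
W_disconnected = disconnect(W)
W_hyper = hyperlearn(W, inputs, targets)
```

The point of the real experiment, of course, is to see which manipulation makes the model’s recalls derail in the same way that patients’ narratives do.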
These two were then further extended, with the addition of a parameter to each, at which point the modified hyperlearning model outperformed the disconnection model.

So, what to make of it? It seems like an interesting piece of work. It is hard to know how much light this sheds on schizophrenia, since the brain is a heck of a lot bigger and more complicated than this model. And, well, sometimes things scale up in the straightforward way, and sometimes they don’t.

What one hopes will be the outcome of this sort of work is that it will prompt additional research. We can’t guarantee that results extrapolated from computational systems such as this one will have any predictive value for the brain, but it should be possible at least to construct predictions. A collaboration involving neuroscientists of various stripes could then potentially come up with some clever experiments, which would be interesting, if for no other reason than that they had a direct connection back to this sort of computational model.

Spot-on commentary. Via, as usual, xkcd.

The other thing we can take away is this. We now know how to train up a schizophrenic neural network. Combine it with this punching robot:

Asimov’s-law-violating robot. Via Geekologie.

Teach it to snort coke, and we’ve got all the makings of a Charlie Sheen bot.

Hoffman, R.E., Grasemann, U., Gueorguieva, R., Quinlan, D., Lane, D., & Miikkulainen, R. (2011). Using computational patients to evaluate illness mechanisms in schizophrenia. Biological Psychiatry, 69(10), 997-1005. PMID: 21397213

For more on this article, check out 80beats, over at Discover Blogs.

Field Notes on Science and Nature

So, I wanted to draw your attention to a new book called Field Notes on Science and Nature, just published by Harvard University Press. The book is the result of several years of work by Michael Canfield, who is not only a personal friend from graduate school, but also a really smart and genuinely nice guy.

The book is an exploration of the similarities and differences in how laboratory and field scientists collect and use notes. It includes excerpts from the notes of a range of researchers, along with essays about how they use those notes. I haven’t seen the actual book yet, but the website promises a lot of cool eye candy.

Although the book has already been printed, Mike is interested in continuing to explore these issues on the web. He is particularly interested in questions of how current grad students and researchers navigate between pen-and-paper notes and other types of technology.

If you or someone you know (a student, a colleague, a frenemy, etc.) might be willing to share some of your notes and experience, you (or they) should drop Mike an e-mail (canfield@fas.harvard.edu) and join in the conversation.

And finally: congratulations, Mike! I can’t wait to get my hands on a copy of the book!

What would you do about an old bridge?

So, what would you do if faced with an old-style concrete bridge that does not really have the strength that it should?

One solution has been proposed in a PhD dissertation by Gun Up Kwon of the University of Texas. Of course, this post is only marginally about that, but there you go.

These full-size episodes tend to come out poorly on the blog, so if you want to be able to read this one more easily, I recommend heading over to Darwin Eats Cake.

Enjoy.

Best URL for sharing: http://www.darwineatscake.com/?id=23
URL for hotlinking or embedding: http://www.darwineatscake.com/img/comic/23.jpg

Kwon, G.U. (2008). Strengthening existing steel bridge girders by the use of post-installed shear connectors. PhD dissertation, University of Texas at Austin. http://gradworks.umi.com/33/41/3341639.html

WTF, 1942? Bugs Bunny dons blackface to sell war bonds.

So, one of the features of studying things like biological species or languages is that they’re not really things. Or rather, they are things, but in a fuzzy, not-very-thingy kind of way.

What I mean is that it is often difficult to define the exact boundaries of a species or language. Fundamentally, this is a consequence of the fact that we are trying to apply discrete labels (such as “English” or “Moloch horridus”) to populations of things (speakers or individuals) that exhibit a degree of variation (e.g., dialects or subspecies), and that change over time.

For example, I can easily read a newspaper article written in the 1950s. I can read something from the 1700s and understand it, but it might sound weird. I can read Shakespeare and understand it, but I probably make use of a lot of the footnotes. By the time I’m reading Chaucer, some things might look familiar, but I probably require help to correctly understand most of the words. So, while those texts are all, in a sense, English, the gradual process of change means that the English of 800 or 1000 years ago is as foreign to me as contemporary French or German.

The same is true of biological species. In that context, people sometimes refer to “diachronic species,” which is a way of breaking up a single, continuous biological lineage into subsections that can be given different labels. Given enough knowledge of the biology, one could use not-completely-arbitrary criteria to decide whether two individuals in the same lineage (say, where one was a distant ancestor of the other) should be classified as members of the same species. However, defining break points along the lineage to define species is an inherently arbitrary exercise.

This change process is also true of other (non-linguistic) aspects of culture. There is clearly a continuum of American culture stretching back from the present into the past. And with each additional year that we move back, the culture seems more foreign to me. But how far back do you have to go before you would actually call it a different culture? Again, there is an inherent arbitrariness here that means there is no real answer to the question. I suspect that if you were to take a survey, people’s answers would depend a lot on how old they are.

However, I want to make a pitch for World War II being the natural break point in American culture, if for no other reason than that it would provide a psychological distance that would assuage my discomfort with this video from 1942.

Now, I don’t know what came up for you, but at the time of posting, the related videos that pop up at the end include classics such as “Nazi Duck” and “Bugs Bunny Nips the Nips.”

Also, what’s up with the 1942-era shape of Elmer Fudd’s head?