I think many of the comments miss this point: they seem to assume that it’s consistent with science’s way of knowing for people “to form whatever opinion … they WANT TO” in the face of uncertainty.

I couldn’t disagree more.

http://www.earth-syst-dynam.net/5/271/2014/esd-5-271-2014.pdf

autisutekh:

post-eldritch-horrorcore:

jayno-eyes:

Unnecessarily gendered things:

  • clothes
  • shampoo
  • babies
  • deodorant
  • children’s toys
  • colours

genitals

chromosomes

(via stopgenderingchildren)

As a referee, I would not need to offer an independent data analysis and proof that the statistical error would have a major effect on the conclusions. It would’ve been enough just to point out the error. But once the article appears, the burden of proof is reversed. And I think that’s too bad.

We all make errors, so the point is to catch and fix them sooner, not to avoid making them entirely or to punish the people who make them.

And finally, please please just stop saying it is the responsibility of ‘environmentalists’ to come up with tactics to persuade the rest of us, who by implication are perfectly entitled to sit back and not take our responsibilities on this issue seriously unless and until ‘environmentalists’ come up with arguments that are appealing to us in every way. Gaaaaah!

Our analysis concludes that we should stop trying to assess the long-run economics of mitigating climate change since that is unknowable. Instead, modeling work on the economics of mitigating climate change should focus on the details of how to mitigate climate change, beginning now, in a way that minimizes costs and maximizes the well-being of all people on our fragile planet over the short to medium term and, thus, how to create relevant normative scenarios.
http://www.sciencedirect.com/science/article/pii/S0040162514000468

Raising interesting questions about the current forecasts:

There is an even more significant problem with Pielke’s analysis. In a nutshell, he addresses trend detection when what we need is event risk assessment. The two would be equivalent if the actuarial data were the only data available pertaining to event risk. But that is far from the case; we often have much more information about risk.

Let me illustrate this with a simple example. Suppose observations showed conclusively that the bear population in a particular forest had recently doubled. What would we think of someone who, knowing this, would nevertheless take no extra precautions in walking in the woods unless and until he saw a significant upward trend in the rate at which his neighbors were being mauled by bears?

The point here is that the number of bears in the woods is presumably much greater than the incidence of their contact with humans, so the overall bear statistics should be much more robust than any mauling statistics. The actuarial information here is the rate of mauling, while the doubling of the bear population represents a priori information. Were it possible to buy insurance against mauling, no reasonable firm supplying such insurance would ignore a doubling of the bear population, lack of any significant mauling trend notwithstanding. And even our friendly sylvan pedestrian, sticking to mauling statistics, would never wait for 95 percent confidence before adjusting his bear risk assessment. Being conservative in signal detection (insisting on high confidence that the null hypothesis is void) is the opposite of being conservative in risk assessment.
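The closing point, that demanding 95 percent confidence before acting is the opposite of conservative risk assessment, can be made concrete with a short simulation. The numbers below are hypothetical (roughly 2 maulings per decade, doubling along with the bear population), chosen only to illustrate how rarely a significance test detects a genuine doubling at counts this small:

```python
import math
import random

random.seed(0)

def sample_poisson(lam):
    """Draw one Poisson(lam) sample (Knuth's multiplication method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def binom_tail(n, x):
    """P(X >= x) for X ~ Binomial(n, 0.5): an exact conditional test of
    'the mauling rate has not changed', given the combined count n."""
    return sum(math.comb(n, i) for i in range(x, n + 1)) / 2 ** n

# Hypothetical rates, not data from the article: ~2 maulings per decade
# before the bear population doubles, ~4 per decade after.
RATE_BEFORE, RATE_AFTER = 2.0, 4.0
TRIALS = 20_000

rejections = 0
for _ in range(TRIALS):
    before = sample_poisson(RATE_BEFORE)
    after = sample_poisson(RATE_AFTER)
    n = before + after
    # Reject the null only if the upward shift reaches 95% confidence.
    if n > 0 and binom_tail(n, after) < 0.05:
        rejections += 1

power = rejections / TRIALS
print(f"Expected risk ratio: {RATE_AFTER / RATE_BEFORE:.1f}x")
print(f"Chance the trend test reaches 95% confidence: {power:.0%}")
```

With counts this small the test clears the 95 percent bar only a minority of the time, even though the expected risk has genuinely doubled. An insurer pricing from the a priori population data would already have adjusted its premiums, which is exactly the asymmetry between signal detection and risk assessment described above.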

http://fivethirtyeight.com/features/mit-climate-scientist-responds-on-disaster-costs-and-climate-change/