Without a careful explanation of what it means, this drive for consensus can leave the IPCC vulnerable to outside criticism. Claims such as ‘2,500 of the world’s leading scientists have reached a consensus that human activities are having a significant influence on the climate’ are disingenuous. That particular consensus judgement, like many others in the IPCC reports, is reached by only a few dozen experts in the specific field of detection and attribution studies; other IPCC authors are experts in other fields. But consensus-making can also attract criticism for being too conservative, as Hansen (2007) has most visibly argued. Was the IPCC AR4 too conservative in reaching its consensus about future sea-level rise? Many glaciologists and oceanographers think it was (Kerr, 2007; Rahmstorf, 2010), a caution Hansen attacks as ‘scientific reticence’. Solomon et al. (2008) offer a robust defence: far from reaching a premature consensus, the AR4 report stated that no consensus could be reached on the magnitude of the possible fast ice-sheet melt processes that some fear could lead to 1 or 2 metres of sea-level rise this century, and these processes were therefore excluded from the quantitative estimates.
This leads on to the question of how uncertainty more generally has been treated across the various IPCC Working Groups. As Ha-Duong et al. (2007) and Swart et al. (2009) explain, despite efforts by the IPCC leadership to introduce a consistent methodology for communicating uncertainty (Moss & Schneider, 2000; Manning, 2006), it has in practice been impossible to police. Different Working Groups, familiar and comfortable with different epistemic traditions, construct and communicate uncertainty in different ways. This opens up possibilities for confusion and misunderstanding not just for policy-makers and the public, but among the experts within the IPCC itself (Risbey & Kandlikar, 2007).
For Ha-Duong et al. (2007) this diversity is an advantage: “The diverse, multi-dimensional approach to uncertainty communication used by IPCC author teams is not only legitimate, but enhances the quality of the assessment by providing information about the nature of the uncertainties” (p.10). This position reflects that of others who have thought hard about how best to construct uncertainty for policy-relevant assessments (Van der Sluijs, 2005; Van der Sluijs et al., 2005). For these authors, ‘taming the uncertainty monster’ requires combining quantitative and qualitative measures of uncertainty in model-based environmental assessment: the so-called NUSAP (Numeral, Unit, Spread, Assessment, Pedigree) system (Funtowicz & Ravetz, 1990). Webster (2009) agrees with regard to the IPCC: “Treatment of uncertainty will become more important than consensus if the IPCC is to stay relevant to the decisions that face us” (p.39). Yet Webster also argues that such diverse forms of uncertainty assessment will require much more careful explanation of how different uncertainty metrics are reached; for example, the difference between frequentist and Bayesian probabilities, and the necessity of expert, and therefore subjective, judgements in any assessment process (see also Hulme, 2009a; Guy & Estrada, 2010).
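To make these two ideas more concrete, the sketches below illustrate, first, a NUSAP-style record and, second, the frequentist/Bayesian distinction Webster points to. Both are minimal illustrations under stated assumptions: the field names, numbers and pedigree criteria are invented for exposition and are not drawn from the NUSAP literature or from any IPCC assessment.

```python
# A minimal, illustrative NUSAP-style record: quantitative qualifiers
# (numeral, unit, spread) alongside qualitative ones (assessment, pedigree).
# Field names, values and pedigree criteria are invented for exposition.
from dataclasses import dataclass, field

@dataclass
class NusapRecord:
    numeral: float   # the reported number
    unit: str        # what the number measures
    spread: tuple    # quantitative uncertainty, e.g. a low-high range
    assessment: str  # qualitative judgement of reliability
    pedigree: dict = field(default_factory=dict)  # scored strength of the underlying knowledge

example = NusapRecord(
    numeral=0.4,
    unit="metres of sea-level rise by 2100",  # hypothetical quantity
    spread=(0.2, 0.6),
    assessment="medium confidence",
    pedigree={"empirical basis": 2, "methodological rigour": 3,
              "validation": 1},               # hypothetical 0-4 scores
)
```

The frequentist/Bayesian contrast can be shown with a toy estimation problem; the data are invented, and the simple proportion being estimated stands in for any uncertain quantity:

```python
# Frequentist vs Bayesian estimates of an unknown proportion p from the
# same invented data. Illustrative only; not IPCC methodology.
from math import sqrt

successes, trials = 7, 10  # hypothetical observations

# Frequentist: p is a fixed unknown; report a point estimate and a 95%
# confidence interval derived from the sampling distribution.
p_hat = successes / trials
se = sqrt(p_hat * (1 - p_hat) / trials)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: p itself gets a probability distribution. With a uniform
# Beta(1, 1) prior, the posterior is Beta(successes + 1, failures + 1);
# the choice of prior is exactly the kind of expert, subjective
# judgement the text refers to.
alpha, beta = successes + 1, (trials - successes) + 1
posterior_mean = alpha / (alpha + beta)

print(f"frequentist: {p_hat:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"Bayesian posterior mean (uniform prior): {posterior_mean:.2f}")
```

Read side by side, the two outputs convey the point in the text: the numbers can look similar, but they answer different questions, and the Bayesian one is conditional on an explicitly subjective prior.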
All of this suggests that more studies such as Petersen’s detailed investigation of the claim about detection and attribution in the IPCC Third Assessment Report (Petersen, 2010; see also Petersen, 2000, 2006) are to be welcomed.