Category Archives: Extreme Weather

Famine in Somalia

I recently read “Famine in Somalia: Competing Imperatives, Collective Failures, 2011-2012”, which (as you may have guessed) picks apart the lead-up and response to the 2011-2012 famine in East Africa. It was fascinating, and I wanted to jot down three quick thoughts/takeaways from the book:

1. The famine was predicted. In fact, even the drought that set up the conditions for a potential famine was predicted. But the famine happened anyway. There were a number of reasons for this, chief among them the rise of Al-Shabaab, a fear of food-aid diversion, and a lack of coordination between remotely-managed disaster responses. But this points to the fact that prediction is not the only thing we need to improve: we also need to improve how we respond once a disaster has been predicted.

2. The climate and disaster response community is already using some information about ENSO life-cycles in disaster management, but they could be doing much more. As the authors note, FEWSNET raised the alarm about a potential famine nearly a year in advance using the knowledge that La Niñas (which lead to drought in the Horn of Africa) often follow El Niños. We need to be incorporating this information into our “medium-term” preparedness, not just our disaster responses. For example, we could use this knowledge to anticipate when we may need to pre-position food aid, or develop in-country networks for disaster response. The time it took to develop the necessary in-country networks was a major contributing factor to the famine. We could be using climate information to inform not only the delivery of food stock, but also the allocation of time and money toward developing anticipatory in-country connections with key players ahead of an expected emergency.

3. One of the major themes of the book was a focus on bringing accountability to famines. I believe that as long as famines are treated as unforeseeable disasters, this will remain impossible. If, however, we move out of a disaster-response framework and into a risk-reduction framework, we may be able to make headway. If major droughts are viewed as an expected, recurring phenomenon, then a general preparedness may be reasonably expected. In this framework we need to begin focusing on how long we should expect to wait between droughts. How severe will they be? By posing these questions publicly we can normalize the expectation of drought (and therefore of a planned response). This may provide political accountability for investments in institutions and infrastructure during non-crisis periods. Without a shift in the way we talk about food-security crises, we’re unlikely to see any change.

How (and whether) to Disseminate Climate Forecasts

One topic that I’ve been interested in for a while now, but haven’t yet had the chance to explore in any depth is the way in which we disseminate drought forecasts. In this blog post I’d like to look a little further into how we disseminate, what we disseminate, and whether it makes a difference. The short version is this: we have thought quite a bit about what we provide, but surprisingly little about whether it is effective (cost or otherwise).

Numerous studies have explored the way in which we provide information (see here and edited collections here and here). They have made some real advances in shedding light on how farmers make use of climate forecasts, as well as the estimated impact (i.e. whether farmers changed their responses in the context of workshop participation).

As it turns out, farmers are quite capable of understanding and acting on probabilistic information. For forecasters, this is good news. One question that I would be interested in exploring further is whether the workshops required to train farmers to use forecasts are cost-effective. This question relates both to the initial cost, and to the question of information retention. When testing principles in the context of daylong participatory workshops, we are unable to address issues such as usage retention (particularly following forecasts that do not match the eventual seasonal totals).

A related question, raised by a colleague of mine here at IFPRI, is whether we should really be providing the information to individual farmers or if it is more effective to provide the information to regional met agencies. Again, the question is not whether farmers are capable of using the forecasts, but rather whether providing them directly is cost effective in the long run.

The most pressing question, however, is in many ways the most obvious: do climate forecasts improve yields? A rigorous study (read: randomized controlled trial) of the real-world effects of climate forecasts on yields is badly needed as a means of addressing whether climate forecasts are effective. Although I understand the desire to provide a high quality product (an accurate forecast) in a reliable manner, it is past time that we begin discussing the hard evidence of cost effectiveness.

Much time has been dedicated to studying climate forecasts, but surprisingly little has been invested in understanding what role climate forecasts are likely to play in improving livelihoods. We can’t afford to silo these questions any longer.

A Closer Look at Drought: Why it matters how we measure

How we measure a phenomenon, in this case drought, is as important as what we choose to measure. Sensible methods of measurement form the foundation of any critical analysis.

A recent paper by Sheffield et al. (2012) focuses on this topic, in particular with applications to the Palmer Drought Severity Index (PDSI), a common index used to monitor the onset and progression of meteorological drought conditions. Sheffield et al. assert that because the index uses a rudimentary method for estimating evaporation, it paints an overly pessimistic picture of drought in recent decades. Sheffield argues that, globally, drought has not increased in recent decades when a more physically based method of estimating evaporation is used.

Criticism of PDSI for its simplistic method of measuring evaporation is not new. In fact, Dai and others have researched this same issue but came to the conclusion that using a more physically based method yields results comparable to current estimates. There have been numerous potential explanations for the difference in conclusions, but that is a conversation for another post. For a critique of the Sheffield paper see here, with reactions from Kevin Trenberth and Aiguo Dai. A short summary of the paper may also be found here.

Regardless of the Sheffield paper, it is critical that we continue to evaluate how we measure drought, particularly as the dynamics driving multi-season drought evolve. What was once an accurate index (or indicator) of drought for any given region will not necessarily always be so as the climate changes. Although the reasons for this are many, one is that indices are pared down to be understandable and operationally useful. In the case of the PDSI, what some see as an extraneous level of complexity, others see as a necessity to accurately represent drought. Whether or not this is actually the case undoubtedly warrants further study, both in how the physical phenomena responsible for drought evolve in a changing climate and how those relationships are reflected (or absent) in various drought indices.
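To make the stakes of the evaporation argument concrete, here is a deliberately toy sketch (this is not the PDSI itself, and every number is invented for illustration) of how the same precipitation record can look more or less drought-stricken depending on how evaporative demand is estimated:

```python
# Toy illustration (NOT the actual PDSI): how the choice of
# evaporative-demand estimate changes a simple water-balance anomaly.
# All numbers below are invented for demonstration.

def simple_anomaly(precip, evap_demand, baseline_mean):
    """Water balance (P - E) expressed as a departure from a baseline mean."""
    return [(p - e) - baseline_mean for p, e in zip(precip, evap_demand)]

# The same precipitation record, mm/month (invented).
precip = [80, 60, 40, 30, 50, 70]

# A temperature-only estimate of evaporative demand responds strongly
# to warming (invented values)...
evap_temp_only = [50, 55, 62, 70, 66, 60]
# ...while a more physically based estimate (one also accounting for
# radiation, humidity, and wind) rises less in this hypothetical case.
evap_physical = [50, 52, 55, 58, 56, 54]

# Long-term mean of (P - E) over a reference period (invented).
baseline = 5.0

a1 = simple_anomaly(precip, evap_temp_only, baseline)
a2 = simple_anomaly(precip, evap_physical, baseline)

# Identical rainfall, different "drought": the temperature-only version
# shows deeper deficits in the warm months.
for month, (x, y) in enumerate(zip(a1, a2), start=1):
    print(f"month {month}: temp-only {x:+.1f} mm, physical {y:+.1f} mm")
```

The point of the sketch is only that the index's sensitivity to its evaporation term is a design choice, which is exactly what the Sheffield and Dai camps are arguing over.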

Of particular interest to me is how faithfully the PDSI reflects changes in the hydrological cycle as the phenomena driving observed drying trends change and as the accepted definition of normal is forced to evolve. Altered seasonal snow melt, shifts in large scale atmospheric circulation and increased atmospheric moisture carrying capacity will have undeniable consequences for the hydroclimate in many regions. How these changes are reflected in the PDSI and how sensitive it is to the complexity of its mathematical underpinnings has real world implications for agriculture and water managers, particularly those across the US who use the index as a means of triggering drought relief.

A Closer Look at Drought: Defining Drought

“Research in the early 1980s uncovered more than 150 published definitions of drought. The definitions reflect differences in regions, needs and disciplinary approaches” (The National Drought Mitigation Center, describing Wilhite and Glantz, 1985)

As news agencies across the country have begun reporting the record heat and drought that plagued much of the U.S. throughout 2012 (see two articles from the NYT, one on drought and one on heat), I thought it would be worthwhile to take a closer look at the issue by focusing on how we measure and report drought. The statistics used to underscore just how extensive the drought has been often belie the intricacies that go into measuring such events. Although this may seem semantic, how we report drought (and indeed what we report) has an immediate impact on farmers, water resource managers and taxpayers. In the first part of this two-part post I’ll focus on what we report, and how drought affects different stakeholders.

To give an idea of how extensive the recent drought has become, and how different sources are thinking about and reporting the event, it is worthwhile to take a look at some of the statistics used to describe the drought. Since 2012 recently came to a close, one metric that has been widely circulating is the year’s average temperature. The average temperature across the continental U.S. in 2012 (55.3 F) exceeded the previous record by a full degree Fahrenheit. According to an article published in EOS (Karl et al., 2012): “As of September [2012], every month since June 2011 had above normal average temperatures” (meaning top 1/3 according to data collected since 1895), “a record that is unprecedented”. As of January 1st 2013, more than 61% of the country was still experiencing Moderate to Exceptional drought according to the U.S. Drought Monitor. Although snow storms have swept across the Midwest recently, according to the Drought Monitor such precipitation is “enough to arrest further deterioration but insufficient to improve the drought depiction. Precipitation in Oklahoma had little impact on reservoir and lake levels, and agricultural reports indicated that soil moisture remained depleted”. The USDA described the drought as “seriously affecting U.S. agriculture, with impacts on the crop and livestock sectors and with the potential to affect food prices at the retail level.”

Although statistics such as those above may be taken together to describe a single event, they in fact allude to all four major categories of drought – meteorological, agricultural, hydrological and socioeconomic – described by Wilhite and Glantz in 1985.

Meteorological Drought is often defined as the relative dryness of a region compared to the expected seasonal precipitation. This measure requires the definition of a normal or baseline period and as such is region-specific.

Agricultural Drought links agriculture to the impacts that meteorological or hydrological drought has on the sector. This definition is often dependent on the susceptibility of crops at a given point in the growing season (i.e. topsoil moisture is relevant for planting, but subsoil moisture is more important for maturing plants).

Hydrological Drought focuses on the effects that deficits in precipitation have on the hydrological system. These droughts are often out of phase with meteorological or agricultural drought because a below normal precipitation cycle will take time to work its way through the hydrologic cycle.

Socioeconomic Drought ties the impacts of hydrological, agricultural and meteorological drought to economic impacts. Socioeconomic drought occurs when demand outstrips supply as a result of drought. This definition incorporates the spatial and temporal distribution of supply and demand into the classification of drought.

[For a more complete description of these four main categories of drought, see here]
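As a rough illustration of the meteorological definition above, the sketch below classifies a seasonal precipitation total by its rank against a baseline climatology. The thresholds and data are invented for demonstration, and operational products such as the U.S. Drought Monitor blend many more inputs than raw precipitation:

```python
# Hedged sketch: meteorological drought as relative dryness against a
# baseline climatology. Category thresholds and data are invented;
# this is NOT the U.S. Drought Monitor's actual methodology.

def percentile_rank(value, baseline):
    """Fraction of baseline years at least as dry as `value`."""
    return sum(1 for b in baseline if b <= value) / len(baseline)

def classify(value, baseline):
    """Map a seasonal precipitation total to a coarse drought label."""
    p = percentile_rank(value, baseline)
    if p <= 0.02:
        return "exceptional drought"
    if p <= 0.05:
        return "extreme drought"
    if p <= 0.10:
        return "severe drought"
    if p <= 0.20:
        return "moderate drought"
    return "no drought"

# 30 years of (invented) seasonal totals, mm, defining "normal" for a region.
baseline = [220, 250, 180, 300, 270, 240, 210, 260, 230, 290,
            200, 275, 255, 245, 235, 265, 285, 225, 215, 295,
            190, 310, 280, 205, 240, 250, 260, 270, 230, 245]

print(classify(150, baseline))  # drier than every baseline year
print(classify(250, baseline))  # near the middle of the distribution
```

Note that the classification is entirely relative: 150 mm is a drought here only because of what this particular region's baseline looks like, which is exactly why the definition is region-specific.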

While news sources will often report some form of meteorological drought because it is immediately intuitive, it is important to consider the cases in which different definitions of drought will provide different perspectives on current conditions. Reservoir managers and hydroelectric operators, for example, will be more interested in tracking hydrological drought than meteorological or agricultural drought. A drought that is relatively short-lived but ill-timed for planting or growing crops may have serious agricultural and socioeconomic implications without posing much of an issue for those concerned with long-term hydrological conditions. Similarly, multi-year droughts can pose serious hydrological concerns (i.e. reservoir depletion) that will not necessarily be reflected in the meteorological or agricultural measures of drought. As seasonal rainfall returns to the expected quantity following a long drought, meteorological and agricultural drought will be alleviated long before hydrological conditions are restored.

To a certain extent, all of these modes of drought are dependent on defining a climatologically “normal” baseline. Although that may seem straightforward, it is not always as intuitive as one might think. Part of the problem is that low-frequency variability often operates on time scales longer than recorded histories of hydrologic measurements, which are in turn longer than the baselines established by individual experience alone. The allocation of water rights to the Colorado River is an iconic example of incorrectly estimating a hydrologic baseline. Flow from the river was measured and a baseline established during an anomalously wet period, thus over-allocating the river (and although the scientific overestimation of the streamflow was substantially compounded by political jockeying from interested stakeholders, the compact still provides an example of how easily a hydrological baseline can be misjudged). Compounding the definition of a scientific baseline is the establishment of an internal baseline. It is difficult to establish an internal perception of “normal” that runs counter to our own personal experiences. Although a common perception of “normal” conditions may play a fairly insignificant role in terms of scientific consensus, it plays an enormous role in creating public pressure for political action.
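The baseline problem can be sketched in a few lines. The flow numbers below are invented, loosely inspired by the Colorado River example: the same record yields very different notions of "normal" depending on the reference window used.

```python
# Hedged sketch of the baseline problem: the same flow record gives
# different "normals" depending on the reference window. All numbers
# are invented (units: nominal million acre-feet per year).

flows = [16, 18, 17, 19, 18,   # an anomalously wet early period
         13, 12, 14, 13, 12,   # subsequent drier decades
         11, 13, 12, 14, 13]

wet_baseline = sum(flows[:5]) / 5          # "normal" defined during wet years
full_baseline = sum(flows) / len(flows)    # "normal" over the whole record

print(f"wet-period baseline:  {wet_baseline:.1f}")
print(f"full-record baseline: {full_baseline:.1f}")

# An allocation sized to the wet-period "normal" over-commits the river
# in most subsequent years.
overcommitted_years = sum(1 for f in flows if f < wet_baseline)
print(f"years below the wet-period baseline: {overcommitted_years} of {len(flows)}")
```

The sketch compresses decades into fifteen numbers, but the failure mode is the real one: a baseline fit to an unrepresentative window quietly becomes a promise the system cannot keep.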

Changes in the hydroclimate (either gradual or abrupt changes) further confound efforts to define drought by making the definition of “normal” a bit of a moving target. The changes occurring in such systems may be characterized in two ways: (1) the variability in the system is changing (a non-stationary system), or (2) the mean state in the system is changing (system with a mean trend). The issue of a shifting normal state is particularly relevant for the American West as climatologists study the causes of past drought in order to forecast future conditions. In July 2011 Richard Seager, a climate scientist at Columbia University and long a prominent voice on drought, was asked about the possibility of a perpetual drought. He had this to say: “You can’t really call it a drought because that implies a temporary change. The models show a progressive aridification. You don’t say, ‘The Sahara is in drought.’ It’s a desert. If the models are right, then the Southwest will face a permanent drying out”.

As arid conditions persist across much of the continental U.S., we need to take the time to evaluate how we define this ongoing dry-spell in the context of a changing climate. The way in which we refer to these events has implications beyond semantics. Our language reflects what we expect the consequences of such events to be and the frequency with which we expect them to occur in the future.

How we talk about Hurricane Sandy

As perhaps I should have expected, Google provided an incredibly useful, accessible tool for visualizing information immediately leading up to and in the wake of Hurricane Sandy. The tool provided the projected track, intensity, precipitation and storm surge associated with the event. Even more useful, it included Red Cross shelters, FEMA shelters and food distribution points. Although this is a particularly powerful example of how social media and the web may aggregate available information to anticipate and respond to extreme weather, as a country we need to shift from a mindset of disaster response to one of disaster prevention.

Calling events like Hurricane Sandy the “New Normal” is not only scientifically simplistic (see Curtis Brainard’s and Andrew Revkin’s discussions of Hurricane Sandy in the context of climate change); psychologically, it implies that we should simply accept the increasing cost of extreme weather events (a cost that has largely come from increasing our exposure to extreme weather as opposed to an increase in the incidence of that weather). There is little that could be more dangerous than implying that as the climate changes there is neither anything we can do to mitigate that change, nor anything we can do to reduce our vulnerability to that change. When communicating projections of changing intensity and distributions of extreme events, language should be chosen very carefully. The risk management community has already provided numerous studies on the effects that word choice has on public perception as they have sought to express the risks posed by low frequency, high intensity events.

The risk posed by extreme weather is often abstract and difficult to explain. People rarely understand how dangerous an area may be unless they have lived through a disaster, but that does not mean that the scientific community has no responsibility to work on effective communication. There must be a concerted effort on the part of the scientific community to develop a lexicon appropriate for the public, and one appropriate for decision makers. Without this effort we risk falling further into a mindset of reaction, one that decouples our actions from the losses that ensue. There are too many examples of this already.

Perhaps the most frustrating example of policy reflecting a complacent acceptance of increasingly frequent weather-related losses is flood zone development. In this case it is often not an issue of risk communication: anyone who cantilevers a restaurant out over the ocean can take a look out the window at high tide to get an idea of the flood risk. Despite a clear understanding and communication of potential losses, zoning continues to encourage development in high risk areas. An article in the Huffington Post does a particularly good job of detailing the paradoxical calls to action made by Mayor Bloomberg in the midst of continuing coastal development in NYC. The article cites at least a half dozen ways in which the city and the state pioneered funding research to characterize the risks posed by sea level rise and storm surge, only to promptly dismiss the most substantive concerns raised by these reports.

As a country we need to firmly establish a starting point for bringing our policies on coastal development in line with our risk assessment research. Although altering already-developed coasts is a more difficult conversation, fraught with both moral and political issues, declining to knowingly locate people in the path of a disaster should be less controversial. We can’t afford to continue sending the message through implemented policies that coastal development is a matter of economic interest only. There is a certain audacity to pausing the breakneck development of coastal areas only long enough to grieve for those lost in the storm before resuming work as before. I’m not implying a halt to development, only a balance of the economic benefits with the physical risks.