Where did the standard 200bp shock come from (part 1)?

Originally published 2/28/2012 © 2021 Olson Research Associates, Inc.

If there is one thing that’s absolutely standard in any interest rate risk report package, it’s the +/-200bp shock. It has been standard since the mid-1990’s, when the regulators suggested it in their 1996 Joint Policy Statement on IRR.

Bank management should ensure that risk is measured over a probable range of potential interest rate changes, including meaningful stress situations…The scenarios used should incorporate a sufficiently wide change in market interest rates (e.g. +/-200 basis points over a one year horizon)…
Joint Policy Statement on IRR, 1996

Fifteen years later, in their 2010 IRR Advisory, the regulators even acknowledged that +/-200 had become the “convention”:

In many cases, static interest rate shocks consisting of parallel shifts in the yield curve of plus and minus 200 basis points may not be sufficient to adequately assess an institution’s IRR exposure. As a result, institutions should regularly assess IRR exposures beyond typical industry conventions, including changes in rates of greater magnitude…
IRR Advisory, 2010

No doubt by explicitly mentioning the 200bp shift the regulators fueled the fire that made it the standard. But why was it suggested in the first place? Was it just someone’s best guess? Or was there at least some data behind the estimate? It turns out that there is some data to back up the suggestion, but ultimately it’s less “scientific” than you might think.

Throughout the late 1990’s and early 2000’s I took every opportunity to read and research various topics related to A/L modeling and management. The question of the origin of the 200bp shift had always intrigued me. Why 200bp? Why not 100bp or 300bp? (Or as one client of ours in the mid ‘90’s insisted…we must use +/-125bp!) Or something else altogether? In 2004 I was preparing for a series of IRR seminars for the FFIEC when I happened upon the book Interest Rate Risk Management by Leonard Matz. Buried in chapter 8 were a few brief paragraphs that discussed the origin of the +/-200bp test. The Fed had apparently published an analysis of historical interest rate volatility from March 1978 to December 1992. By measuring the average change in periodic rates and then expanding the analysis to include multiple standard deviations, they determined that a 200bp change would be a reasonable test.

As many of you know, I have some background and formal education in mathematics and computer programming, so this analysis of historical rate volatility piqued my interest. I set out to try to recreate the study. I didn’t want to get too complicated, but I did want to verify the origin of the conventional 200bp shift for myself. Could there be such a simple and straightforward explanation for the 200bp shift? It certainly seemed that way.

I started by downloading historical interest rate data from the Fed’s web site. I grabbed the monthly average data for the 1, 2, 3, 5, 7, 10, and 30-year treasuries for March 1978 through December 1992 and calculated the nominal change in rates from month to month. From there it’s a pretty simple matter of calculating the averages and standard deviations. Here’s a link to the spreadsheet on Google Docs. (A code sketch of the same calculation follows the list below.) After reviewing the data and calculations I drew several conclusions. These are by no means “scientific” conclusions, i.e., I didn’t develop elaborate theories or run extensive tests. These are my opinions based on my observations of the data:

  1. I wasn’t surprised to see that the average nominal change from month-to-month for any given rate was close to zero. Sometimes rates go up, other times they go down. Over time things average out to about zero.

  2. If I make the assumption that the interest rate change data is normally distributed (think “bell curve”), then I know that almost all of my measurements (roughly 99.7% of them) will fall within three standard deviations of the average (or mean). If history is a good indicator of future rates, then I can multiply one standard deviation by three to approximate the greatest rate change I’m likely to see. You can see that the largest such change would be 212bp for the 1-year treasury.

  3. Also not surprisingly, the points farther out on the curve showed less volatility. For the 30-year rate, three standard deviations was around 114bp. But I also knew that the 30-year had been quite volatile at various points during this time frame. In one month, between January and February 1980, the 30-year treasury moved by +154bp. So clearly large moves were not out of the realm of possibility.
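If you’d like to reproduce the exercise yourself, here’s a minimal sketch in Python. It assumes the pandas_datareader package and that the FRED series codes GS1 through GS30 correspond to the monthly-average constant maturity Treasury yields described above; the resulting figures may differ slightly from the original spreadsheet.

```python
# Rough re-creation of the volatility study described above.
# Assumes FRED series GS1..GS30 = monthly-average constant maturity yields.
import pandas as pd
from pandas_datareader import data as pdr

SERIES = ["GS1", "GS2", "GS3", "GS5", "GS7", "GS10", "GS30"]

# Monthly average yields (in percent), March 1978 through December 1992
rates = pdr.DataReader(SERIES, "fred", start="1978-03-01", end="1992-12-31")

# Nominal month-to-month change, converted from percentage points to bp
changes_bp = rates.diff().dropna() * 100

summary = pd.DataFrame({
    "mean_bp": changes_bp.mean(),  # close to zero, per observation 1
    "std_bp": changes_bp.std(),
})
# Three standard deviations, per observations 2 and 3
summary["three_sigma_bp"] = 3 * summary["std_bp"]

print(summary.round(1))
```

Run as-is, the three_sigma_bp column should land in the neighborhood of the 212bp (1-year) and 114bp (30-year) figures discussed above, though later data revisions may shift the numbers a little.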

The data by itself doesn’t prove anything; I drew my own “reasonable” conclusions. Since we’re stress-testing for risk, it makes sense to use the worst-case scenario, and the 212bp change for the 1-year treasury is the worst case. Since 212bp is approximately 200bp, I assumed that I’d found the answer.

It turns out that I was wrong…but only partially. The data and calculations I used were slightly different from the Fed’s, but my attempt to simplify and standardize was right on target.

Next: Part 2 - Where did the standard 200bp shock come from? >>

(This post is part of a series which provides a basic overview and discussion of interest rate risk stress-testing.)