# Comment

- Omotunde Johnson
- Published: April 2002

The paper by Uhl and Monet describes techniques for measuring future credit exposure on derivatives contracts when the value of the derivative contract is correlated with the credit quality of the counterparty to that contract. The existence of such a correlation creates the potential for a situation called “wrong-way” credit exposure—meaning that the exposure increases at just the wrong time.

Standard techniques for estimating potential future credit exposure on derivatives contracts ignore the possibility of such a correlation, and recent events have caused market participants to seek to generalize those techniques.

## Background

It is worthwhile to spend a moment on why there is interest in this seemingly arcane issue. In the early days of the swaps markets, traditional commercial bankers viewed swaps and forwards as just as credit risky as loans. That view waned, and traders began viewing such contracts as virtually free of credit risk.

Eventually, a consensus emerged that such derivatives feature some credit risk as the result of the compound event of counterparty default and being in the money. Since the circumstance under which the loss would be incurred was a compound event, it was viewed as highly improbable.

Thus, when the market risk associated with derivatives trading garnered a lot of attention in the late 1980s, counterparty credit risk was almost forgotten. The breathtaking growth of the derivatives markets and some well-publicized derivatives trading losses and mishaps caught the attention of regulators worldwide. The Office of the Comptroller of the Currency issues a quarterly report (OCC, 2000) on derivatives activity of commercial banks in the United States. The report draws from quarterly Call Report data filed by U.S. commercial banks and includes notional values of derivatives, revenues associated with derivatives, and estimates of credit risks that are calculated for risk-based capital purposes. Although the data are broad and not particularly useful for many purposes, they convey the sense of dramatic growth that has caught the eye of the public and the regulators. Making the story a little more interesting, however, is the fact that those exposures are primarily accounted for by seven banks. (J.P. Morgan is number two in notional exposures.)

The regulatory community has focused its attention on the measurement of the market risk associated with bank derivatives activity. During the last decade we witnessed an international effort to reform the capital accord to incorporate market risk from derivatives activity. This effort culminated in a market risk rule that requires banks to build internal “value at risk” models. Value at risk modeling has generated a lot of attention, including conferences, articles in trade publications, and scholarly research.

If market risk measurement was a bustling city in the 1990s, counterparty credit risk measurement was something of a sleepy frontier town. It gained some attention from the international regulatory community and was addressed by the international capital accords, but was dealt with through rule-of-thumb multipliers.

The same OCC report mentioned above includes data on the credit exposure and credit losses from derivatives. The data start in 1996, and initially the story is dull: exposures ran around one basis point of notional, and aggregate losses for the banking system were typically under $20 million. In 1998, the first real incidence of bank credit losses from derivatives materialized. After ramping up for a year, losses surged to $445 million, or 12 basis points. (Of course, that was in the aggregate, and we cannot presume the losses were evenly distributed across the reporting banks.) Thus, the issue of the credit risk of derivatives (counterparty credit risk) resurfaced.

The events of 1998 have been chronicled widely: the contagion effects of the Asian crisis, the Russian default, and the reorganization of Long-Term Capital Management (LTCM). From the self-assessment that took place in the derivatives industry in the wake of LTCM, it became apparent that practitioners varied in their degree of sophistication in estimating future credit exposures. The Counterparty Risk Management Policy Group (CRMPG) produced a June 1999 report that listed 12 recommendations concerning the measurement and management of that risk. The fifth recommendation was that financial intermediaries should “upgrade their ability to monitor and … set limits for various measures.” Among the specific issues that the CRMPG identified was the modeling assumption that the credit quality of the counterparty is independent of the market factors related to the exposure. Uhl and Monet’s paper can be seen as one part of the response to the recommendations of the CRMPG.

At this point, I should make clear that I have a point of view on the issues discussed in this paper. As a bank regulator, my first priority is to focus on the risk of extreme loss rather than expected loss. (I characterize it as follows: my job is about the higher moments of the loss distribution.) The way I have thought about counterparty credit risk in the past is that default probabilities are small, so expected losses are small. Therefore, I have been less concerned about precision in pricing (which addresses expected loss) and more concerned with “maximum” loss—so-called Value at Risk. This is essentially the same concern identified by the CRMPG, when it recommended improved “monitoring and limit setting.”

## Summary of the Paper

### Basic Story

This paper can be understood by reference to a general model of credit risk. Since we are talking about random variables, the magnitude of future credit losses will be governed by a joint distribution of probability of default, exposure, and loss rate. To be very concrete, expected credit loss at some date *t* in the future is the product of three things:

*E_0(credit loss_t) = (p_t)(exposure_t)(loss rate_t)*,

where *p* is the probability of default at some future date *t* and the other terms are self-explanatory. For this discussion, ignore the loss rate (assume that it is 1).

*E_0(credit loss_t) = (p_t)(exposure_t)*

The expectation is formed using two pieces of information: the time *t* probability of default and the time *t* exposure. If there is a constant (unconditional) probability of default, then we can calculate expected loss by multiplying the unconditional default probability by the unconditional expected exposure. That is the way expected counterparty losses have been calculated. If default and exposure are not independent, however, expected loss depends on both the conditional probability and the conditional exposure.
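The gap between the two calculations can be made concrete with a small simulation. The setup below is a hypothetical illustration, not the paper's model: a single market factor drives both the default event and the exposure, so the naive product of unconditional terms understates the true expected loss.

```python
import random

# Hypothetical wrong-way setup (illustration only, not the paper's model):
# one market factor m drives both the default event and the exposure.
random.seed(0)
N = 200_000

true_loss = 0.0    # Monte Carlo estimate of E[default * exposure]
defaults = 0       # count of defaults, for the unconditional probability
exposure_sum = 0.0

for _ in range(N):
    m = random.gauss(0.0, 1.0)                       # shared market factor
    default = (m + random.gauss(0.0, 1.0)) < -2.0    # rare default event
    exposure = max(0.0, -m)                          # exposure rises as m falls
    true_loss += exposure if default else 0.0
    defaults += default
    exposure_sum += exposure

true_loss /= N
naive_loss = (defaults / N) * (exposure_sum / N)     # independence assumption

print(f"naive expected loss (p x exposure): {naive_loss:.4f}")
print(f"true expected loss E[p x exposure]: {true_loss:.4f}")
```

With these made-up parameters the conditional calculation comes out several times larger than the naive product, which is the wrong-way effect in miniature.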

### Explanation

I found it useful to think about the “wrong-way” phenomenon by restating the problem in a familiar framework. Looking at the problem in another way convinced me that this is truly a mismeasurement and worthy of analysis.

Suppose that counterparty default can be explained using the following regression equation:

*Z_i = aW_i + u_i*,

where Z is an indicator variable representing whether counterparty *i* will default, specified in this case as a linear function of variables represented by vector *W*. So we have begun by setting up the problem in the same way we set up limited-dependent variable models of defaults. For simplicity, the function is linear with an additive error term *u*. Default occurs if Z>0.

Suppose exposure (*X*) for counterparty *i* is explained by a parallel linear equation, with exogenous variables represented by vector *V* (distinct from the default index *Z*) and error term *e*:

*X_i = bV_i + e_i*,

but the key condition is that exposure only exists if default occurs, *Z_i > 0*.

This is a familiar econometric problem (see, for example, Chapter 22 in Greene, 1993), dealing with sample selection, or a problem of incidental truncation. In this case, the expected exposure can be estimated by finding the expected value of *X*, conditional on whether *X* is observed, that is, conditional on default:

*E(X_i | Z_i > 0) = bV_i + ρσ_e λ(aW_i)*,

where *λ(·) = φ(·)/Φ(·)* is the inverse Mills ratio, *ρ* is the correlation between the error terms *u* and *e*, and *u* is taken to be standard normal.

This equation gives us a familiar result from econometrics. In an equation affected by incidental truncation due to sample selection bias, the conditional expected value of exposure is the sum of the fitted regression and a correction for the sample selection impact. The basic insight that is relevant here is that the probability of default and the exposure at time *t* can both be influenced by the same random variable. In other words, a simple regression equation for exposure will feature an omitted variable problem. If there is a variable that affects both exposure and default, it exerts its influence through the error term of the exposure equation, and the unconditional expected value of exposure is a statistically biased estimator. Thus, by analogy, a calculation of expected exposure that ignores the possible dependence of exposure on the factor driving defaults can be misleading.
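The incidental-truncation correction can be sketched numerically. The functional form below is the standard textbook sample-selection result (the subject of Greene's Chapter 22); the parameter values, and the regressor values *w* and *v*, are invented for illustration.

```python
import math

def norm_pdf(z: float) -> float:
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z: float) -> float:
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_exposure(w, v, a, b, rho, sigma_e):
    """E[X | default] for the selection model Z = a*w + u, X = b*v + e,
    with u standard normal and Corr(u, e) = rho: the fitted regression
    plus rho * sigma_e times the inverse Mills ratio."""
    c = -a * w                                # default requires u > c
    mills = norm_pdf(c) / (1.0 - norm_cdf(c))  # E[u | u > c]
    return b * v + rho * sigma_e * mills

# Made-up parameter values, purely for illustration:
uncond = 0.8 * 1.0                            # b*v, unconditional fitted exposure
cond = conditional_exposure(w=-1.5, v=1.0, a=1.0, b=0.8, rho=0.6, sigma_e=0.5)
print(f"unconditional expected exposure: {uncond:.3f}")
print(f"conditional on default:          {cond:.3f}")
```

With a positive error correlation the correction term is positive, so the exposure expected in default exceeds the unconditional estimate; with rho = 0 the correction vanishes and the naive calculation is exactly right.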

## Quantification

Not surprisingly, the tough part of the task of this paper is coming up with the quantification of the conditional terms that go into the necessary calculations. This computational work is quite detailed and quite specific to the type of contract under consideration. The authors focus on the calculation of individual credit risk, as opposed to the simulation techniques that are used to evaluate portfolios. Furthermore, they focus on expected exposures, not the full distribution of exposures. Even with this narrow focus, they have to resort to solving specific cases. In the example they present in the paper (forex forwards), they must come up with the forward value of the exchange rate given default. They do this as follows:

- In one case (sovereign default), by resorting to another Morgan study that presumably has measured those conditional forward values. (That, by itself, is quite interesting, because it shows that depreciation, *given sovereign default*, is bigger for previously higher rated sovereigns.)
- In the second case (private firm defaults), by building a Merton-type model of corporate default as a function of forex volatility and correlations between asset value and forex volatility.

Using those measurements and the assumption that the forex rate will display the same volatility from that point forward, they calculate the conditional value of the forward contract. Those conditional values are then weighted by the conditional probabilities of default, derived using Bayes' theorem and a lot of parameterizations from other sources. In summary, this paper contains a lot of detailed computational work.
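The weighting step can be sketched as a toy calculation. The scenarios, probabilities, and exposure values below are invented; only the Bayes mechanics mirror the paragraph above.

```python
# Toy Bayes-weighting of conditional exposures (all numbers invented).
# Each scenario carries a prior probability, a default probability given
# the scenario, and the exposure on the contract in that scenario.
scenarios = {
    "mild depreciation":   (0.70, 0.005, 0.2),
    "severe depreciation": (0.25, 0.030, 1.5),
    "crisis":              (0.05, 0.200, 4.0),
}

# Total default probability, summed over scenarios.
p_default = sum(p * pd for p, pd, _ in scenarios.values())

# Bayes: P(scenario | default) = P(default | scenario) * P(scenario) / P(default),
# then weight each scenario's exposure by that conditional probability.
exposure_given_default = sum(
    (p * pd / p_default) * x for p, pd, x in scenarios.values()
)

unconditional_exposure = sum(p * x for p, _, x in scenarios.values())

print(f"P(default)            = {p_default:.4f}")
print(f"E[exposure]           = {unconditional_exposure:.3f}")
print(f"E[exposure | default] = {exposure_given_default:.3f}")
```

Because default is concentrated in the high-exposure scenarios, the exposure expected given default comes out well above the unconditional expected exposure, which is the wrong-way pattern the authors quantify.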

## Comments

I have one general comment, followed by three specific points. The general comment is that this is a good and useful paper. It extends the literature in this admittedly narrow field. It is good because it tackles a small but important computational issue in derivatives credit risk management and shows that the calculations can be done. It is a nice example of the messy intersection of well-founded financial modeling and practical computational problems. Now, consider the specific comments.

The first comment is that, despite all of the detail in the paper, it is impossible to tell exactly what was done. The authors are to be congratulated for taking a step toward clarifying this concept and doing the work necessary to quantify it. This can be thought of as an extension and elaboration of the article by Levin and Levy that appeared in the July 1999 issue of *Risk* magazine.

This paper, like most computational papers, however, suffers from the problem that it is hard to follow if you do not already know how to do it. The critical parameters that are used to calculate the conditional, expected values of the forex contracts—the “residual values”—are the least understandable part of the paper. In the cases that were worked out in the paper, the residual values were either pulled from another source or created using a model that incorporates parameters that appear to be estimated, but were not really explained.

The second comment is that this paper may give the unfortunate impression that the calculations are precise. This work can be thought of as a response to points made by Greg Duffee in his 1996 *Journal of Banking and Finance* article, “On Measuring the Credit Risks of Derivative Instruments.” We should not forget, however, the other point in Duffee’s article: he cautions us against using models of the stochastic behavior of financial variables without recognizing the inherent model uncertainty. Although this is always true, the authors had to go through a lot of steps to get parameters to populate this model. Naive readers may think the answers are precise; Duffee found model errors on the order of 20 percent.

The final comment is that, despite going a long way toward explaining how to handle wrong-way credit risk, this paper does not answer one of the key questions that served as the motivation for looking at this issue. Recall that the CRMPG responded to the Asian/Russian/LTCM situation by recommending that financial intermediaries should “upgrade their ability to monitor and set limits for various measures.” This paper derives a measure of expected exposures, which is primarily useful in pricing credit. In other words, this paper deals with the first moment of conditional exposures. Limit setting, and risk analysis in general, requires estimates of higher-order statistics of the loss, and therefore exposure, distributions. While the knowledge of the first moment of the distribution that this paper delivers is necessary, it is not sufficient for risk control.

## Conclusion

The authors should be applauded and encouraged for this useful and important work. It stands at the always challenging nexus of theoretically sound work and real applied work.

I would like to conclude by restating my point of view. This paper shows how hard it is to get answers to the expected exposure question. That is helpful for pricing. As a regulator and implicit risk manager, I am interested in the higher moments of the distribution, and I suspect that the difficulty of answering those questions increases nonlinearly.

## References

Counterparty Risk Management Policy Group, 1999, "Improving Counterparty Risk Management Practices" (June).

Duffee, Gregory, 1996, "On Measuring Credit Risks of Derivative Instruments," *Journal of Banking and Finance* (June), pp. 805–833.

Finger, Christopher, 1999, "Toward a Better Estimation of Wrong-Way Credit Exposure," *RiskMetrics Journal*, RiskMetrics Group LLC (Spring).

Greene, William H., 1993, *Econometric Analysis*, 2nd ed. (New York: Macmillan).

Levin, R., and Arnon Levy, 1999, "Wrong-Way Credit Exposure," *Risk Magazine* (July), pp. 52–55.

Office of the Comptroller of the Currency, 2000, "OCC Bank Derivatives Report, Fourth Quarter 1999."

^1 The following views are my own and do not represent the views of the Office of the Comptroller of the Currency. In addition, these comments are not part of, or the result of, a bank examination; J.P. Morgan is not a national bank, and these comments reflect only information revealed in this paper.